diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Diskeeper PREMIER EDITION 12.0Build 758.FINAL WORKING .rar The Ultimate Solution for Disk Defragmentation.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Diskeeper PREMIER EDITION 12.0Build 758.FINAL WORKING .rar The Ultimate Solution for Disk Defragmentation.md deleted file mode 100644 index 4043e71ee1c0958afdca500726585e49aad85596..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Diskeeper PREMIER EDITION 12.0Build 758.FINAL WORKING .rar The Ultimate Solution for Disk Defragmentation.md +++ /dev/null @@ -1,101 +0,0 @@ -
-

Diskeeper PREMIER EDITION 12.0Build 758.FINAL WORKING .rar

-

Introduction

-

If you are looking for a way to optimize your computer's performance and prevent disk fragmentation, you might be interested in Diskeeper PREMIER EDITION 12.0Build 758.FINAL WORKING .rar. This compressed archive contains the installation files for Diskeeper PREMIER EDITION 12.0, powerful disk-defragmentation software that can improve your system's speed, reliability, and efficiency.

-

Diskeeper PREMIER EDITION 12.0Build 758.FINAL WORKING .rar


Download File ►►►►► https://byltly.com/2uKxWk



-

In this article, I will explain what Diskeeper PREMIER EDITION 12.0 is, how it works, what its features and benefits are, and how to download and install it from the .rar file. I will also answer some frequently asked questions about the software and share some tips and tricks for getting the most out of it.

-

How does Diskeeper PREMIER EDITION 12.0 work?

-

Diskeeper PREMIER EDITION 12.0 works by using two main methods to optimize your disk performance: defragmentation and free space consolidation.

-

Defragmentation

-

Defragmentation is the process of rearranging the files on your disk so that they are stored in contiguous blocks, making them easier and faster to access. Diskeeper PREMIER EDITION 12.0 can defragment your files in real-time, as soon as they are created or modified, or on a scheduled basis, depending on your preferences.
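The payoff of contiguity is fewer disk-head movements. As a rough illustration, here is a toy Python sketch (not Diskeeper's actual algorithm; the block layouts are made up) that counts how many times the head must jump when reading a file's blocks in order:

```python
def seeks_needed(block_positions):
    """Count disk-head jumps when reading a file's blocks in order.
    Consecutive positions (n, n+1) can be read in one continuous pass."""
    return sum(1 for a, b in zip(block_positions, block_positions[1:]) if b != a + 1)

fragmented = [2, 9, 5, 14]   # the file's blocks scattered across the disk (made-up layout)
contiguous = [6, 7, 8, 9]    # the same file stored in one run after defragmentation
print(seeks_needed(fragmented), seeks_needed(contiguous))  # → 3 0
```

The fragmented layout forces a jump between every pair of blocks, while the contiguous one can be read in a single pass.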

-

Free space consolidation

-

Free space consolidation is the process of combining the free space on your disk into larger chunks, making it easier for new files to be written without fragmentation. Diskeeper PREMIER EDITION 12.0 can consolidate your free space in the background, without affecting your system performance or requiring a reboot.

-

What are the features and benefits of Diskeeper PREMIER EDITION 12.0?

-

Diskeeper PREMIER EDITION 12.0 has many features and benefits that make it superior disk-defragmentation software. Some of them are:

- Real-time defragmentation that handles files as soon as they are created or modified, plus scheduled defragmentation for times you choose.
- Background free space consolidation that runs without affecting system performance or requiring a reboot.
- Low resource usage, so it does not interfere with your work.
- Disk health monitoring that alerts you to potential problems or failures.

How to download and install Diskeeper PREMIER EDITION 12.0 from the .rar file?

-

To download and install Diskeeper PREMIER EDITION 12.0 from the .rar file, you need to follow these steps:

-
1. Download the .rar file from one of the web search results. Make sure you have enough disk space to store the file.
2. Extract the .rar file using software like WinRAR or 7-Zip. You will get a folder containing the installation files for Diskeeper PREMIER EDITION 12.0.
3. Run the setup.exe file as an administrator and follow the on-screen instructions. You will need to accept the license agreement, choose the installation location, and select the components you want to install.
4. After the installation is complete, restart your computer for the changes to take effect.
5. Launch Diskeeper PREMIER EDITION 12.0 from the Start menu or the desktop shortcut and enjoy its features.

Conclusion

-

Diskeeper PREMIER EDITION 12.0 is a powerful disk defragmentation software that can improve your system performance, reliability, and efficiency by preventing fragmentation and optimizing your disk space. It can run in the background without affecting your system resources or interfering with your work. It can also monitor your disk health and alert you of any potential problems or failures.

-

If you want to download and install Diskeeper PREMIER EDITION 12.0 from the .rar file, follow the steps explained above. You will need software like WinRAR or 7-Zip to extract the .rar file, and then you can run the setup.exe file as an administrator.

-


-

I hope this article has helped you understand what Diskeeper PREMIER EDITION 12.0 is, how it works, what its features and benefits are, and how to download and install it from the .rar file.

-

FAQs

-

Here are some frequently asked questions about Diskeeper PREMIER EDITION 12.0:

-

Q: How much does Diskeeper PREMIER EDITION 12.0 cost?

-

A: Diskeeper PREMIER EDITION 12.0 is not a free software. You need to purchase a license key to activate it after installing it from the .rar file.

-

Q: How do I activate Diskeeper PREMIER EDITION 12.0?

-

A: You need to enter your license key in the activation window that appears when you launch Diskeeper PREMIER EDITION 12.0 for the first time. You can also access the activation window from the Help menu.

-

Q: How do I customize Diskeeper PREMIER EDITION 12.0 settings?

-

A: You can access Diskeeper PREMIER EDITION 12.0 settings from the Dashboard menu or by right-clicking on its icon in the system tray. You can change various options such as defragmentation mode, schedule, performance settings, alerts settings, etc.

-

Q: How do I check my disk performance and health?

-

A: You can check your disk performance and health from the Reports menu or by clicking on the Analyze button in Diskeeper PREMIER EDITION 12.0 interface. You can view various statistics such as fragmentation level, free space level, disk temperature, disk age, etc.

-

Q: How do I uninstall Diskeeper PREMIER EDITION 12.0?

-

A: You can uninstall Diskeeper PREMIER EDITION 12.0 from the Control Panel or by running the uninstall.exe file in its installation folder.

-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Glwiz Token Code.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Glwiz Token Code.md deleted file mode 100644 index a96142b90527cb1a3e89f4ef092389ffa25f3341..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Glwiz Token Code.md +++ /dev/null @@ -1,51 +0,0 @@ - -

How to Watch Live TV and On-Demand Shows with GLWiZ Token Code

- -

If you are looking for a way to enjoy multicultural programming from around the world, you might want to check out GLWiZ. GLWiZ is a web-based application that allows you to watch live TV, on-demand movies and series, and radio channels from various countries and languages. You can access GLWiZ on your smart TV, Android device, or Apple TV with a simple token code.

- -

A token code is a unique number that you get when you purchase a GLWiZ app recharge card. You can use this token code to register and activate your GLWiZ account on your device. You can also use it to renew your subscription if you already have an account.

-

Glwiz Token Code


Download Zip ··· https://byltly.com/2uKuY7



- -

In this article, we will show you how to get and use a GLWiZ token code to enjoy unlimited entertainment on your screen.

- -

How to Get a GLWiZ Token Code

- -

There are two ways to get a GLWiZ token code: online or offline.

- -

If you want to buy a token code online, you can visit the official website of GLWiZ and click on the "Buy Token Code" button. You will be redirected to a secure payment page where you can choose your preferred platform (smart TV, Android, or Apple TV) and subscription plan (monthly or yearly). You can pay with your credit card or PayPal account. After completing the payment, you will receive an email with your token code and instructions on how to use it.

- -

If you want to buy a token code offline, you can look for a GLWiZ app recharge card at your local store or retailer. A recharge card is a physical card that has a PIN number and a token code printed on the back. You can scratch off the protective layer to reveal the numbers. You can also call the customer service number on the card to get your token code over the phone.

- -

How to Use a GLWiZ Token Code

- -

Once you have your token code, you can use it to register and activate your GLWiZ account on your device. Here are the steps to follow:

- -
1. Download and install the GLWiZ app on your device from the Google Play Store or the App Store.
2. Open the app and go to "My Account". Note down the token code that is displayed on the screen.
3. Go to www.glwiz.com and click on "Register". Choose "GLWiZ App Recharge Card" as your payment method and click on "Continue".
4. Enter the token code that you noted down from the app and click on "Verify".
5. Enter the PIN number that you got from your recharge card or email and click on "Verify".
6. You will see a confirmation message that your account has been activated. You can now enjoy watching live TV and on-demand shows on your device.

If you already have an account and want to renew your subscription, you can follow these steps:

-

- -
1. Go to www.glwiz.com/renew/smarttv and log in with your credentials.
2. Click on "Details Platform and Subscription" and choose your platform and plan.
3. Click on "Renew Subscription" and choose "Pay with GLWiZ App Recharge Card" as your payment method.
4. Enter the PIN number that you got from your recharge card or email and click on "Verify".
5. You will see a confirmation message that your subscription has been renewed. You can continue watching live TV and on-demand shows on your device.

Conclusion

- -

GLWiZ is a great way to watch live TV and on-demand shows from different countries and languages. You can access it on your smart TV, Android device, or Apple TV with a simple token code. You can get a token code online or offline and use it to register and activate your account. You can also use it to renew your subscription if you already have an account.

- -

If you need more information or assistance, you can contact the customer service of GLWiZ at +1 905.762.5037 or 1.866.236.2026 anytime. You can also chat with them live on their website.

- -

We hope this article has helped you understand how to get and use a GLWiZ token code to enjoy live TV and on-demand shows on your device.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy San Andreas on Your PC with Apkpure - The Best Way to Play GTA Games.md b/spaces/1phancelerku/anime-remove-background/Enjoy San Andreas on Your PC with Apkpure - The Best Way to Play GTA Games.md deleted file mode 100644 index 888d630e69f51c736fcf6f92748cf37f5fc9cc67..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy San Andreas on Your PC with Apkpure - The Best Way to Play GTA Games.md +++ /dev/null @@ -1,102 +0,0 @@ -
-

San Andreas Download APKPure: How to Play the Classic GTA Game on Your Android Device

-

If you are a fan of the Grand Theft Auto (GTA) series, you probably have played or heard of San Andreas, one of the most popular and acclaimed titles in the franchise. Released in 2004 for PlayStation 2, Xbox, and PC, San Andreas is an action-adventure game that follows the story of Carl Johnson, a former gangster who returns to his hometown of Los Santos after his mother's death. There, he gets involved in a series of events that take him across the state of San Andreas, which is based on California and Nevada.

-

san andreas download apkpure


DOWNLOAD »»» https://jinyurl.com/2uNNeA



-

San Andreas is widely regarded as one of the best GTA games ever made, thanks to its engaging storyline, diverse gameplay, rich soundtrack, and huge open world. The game has sold over 27 million copies worldwide and has received numerous awards and accolades. It has also been ported to various platforms, including mobile devices.

-

If you want to play San Andreas on your Android device, you might be wondering how to do it. One of the easiest and safest ways is to download it from APKPure, a website and app that offers free and secure APK files for Android users. In this article, we will show you how to download and install San Andreas from APKPure, as well as some tips and tricks for playing it on your Android device.

-

Introduction

-

What is San Andreas?

-

San Andreas is the seventh main installment in the GTA series, developed by Rockstar North and published by Rockstar Games. It is set in 1992, in a fictionalized version of California and Nevada called San Andreas. The game follows the adventures of Carl Johnson (CJ), who returns to his hometown of Los Santos after five years of living in Liberty City. He soon finds out that his old gang, the Grove Street Families, has been weakened by drugs and corruption, and that his former friends and enemies are involved in a conspiracy that threatens his life and family.

-

The game features a nonlinear gameplay style that allows players to explore the three cities of Los Santos, San Fierro, and Las Venturas, as well as the rural areas and deserts of San Andreas. Players can also customize CJ's appearance, skills, weapons, vehicles, and properties. The game also offers a variety of missions, side quests, activities, minigames, collectibles, and secrets to discover. The game also has a multiplayer mode that supports up to two players on the same console.

-

What is APKPure?

-

APKPure is a website and app that provides free and safe APK files for Android users. APK stands for Android Package Kit, which is a file format that contains all the elements needed to install an app on an Android device. APK files are usually downloaded from the Google Play Store, but sometimes they are not available or compatible with certain devices or regions. That's where APKPure comes in handy.

-

APKPure offers a large collection of APK files for various apps and games, including popular ones like San Andreas. It also updates its files regularly to ensure that they are working properly and free from malware. Users can download APK files from APKPure's website or app, which also has other features like app management, update notification, file sharing, and more.

-

Why download San Andreas from APKPure?

There are several reasons why you might want to download San Andreas from APKPure instead of the Google Play Store. Here are some of them:

- The game may not be available or compatible with your device or region on the Google Play Store.
- APKPure updates its files regularly to ensure they are working properly and free from malware.
- The APKPure app offers extra features like app management, update notifications, and file sharing.

How to download and install San Andreas from APKPure

-

Downloading and installing San Andreas from APKPure is very easy and straightforward. Just follow these simple steps:

-

Step 1: Go to the APKPure website or app

-

You can access APKPure from your web browser or your Android device. If you use your web browser, go to https://apkpure.com/. If you use your Android device, download and install the APKPure app from https://apkpure.com/apkpure-app.html. Both options are safe and reliable.

-

Step 2: Search for San Andreas and tap on the download button

-

Once you are on the APKPure website or app, use the search bar to look for San Andreas. You should see a list of results that match your query. Tap on the one that says "Grand Theft Auto: San Andreas". You should see a page with more information about the game, such as its description, screenshots, ratings, reviews, and more. Tap on the green download button to start downloading the APK file.

-

Step 3: Enable unknown sources on your device settings

-

Before you can install the APK file, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings and look for security or privacy options. There, you should see an option that says "allow installation of apps from unknown sources" or something similar. Toggle it on and confirm your choice.

-

Step 4: Install the APK file and launch the game

-

After you have enabled unknown sources, go to your downloads folder and look for the APK file that you downloaded from APKPure. It should have a name like "com.rockstargames.gtasa.apk". Tap on it and follow the instructions to install it on your device. Once it is installed, you should see an icon for San Andreas on your home screen or app drawer. Tap on it and enjoy playing the game!

-

Tips and tricks for playing San Andreas on your Android device

-

Playing San Andreas on your Android device can be a lot of fun, but it can also be challenging at times. Here are some tips and tricks that can help you improve your gaming experience:

-

Use a controller or a keyboard for better control

-

San Andreas was originally designed for consoles and PCs, which have different control schemes than mobile devices. The game has been adapted to work with touchscreens, but some players might find it hard to control CJ's movements, actions, and camera angles with their fingers. If you want to have more precise and comfortable control over the game, you can use a controller or a keyboard that is compatible with your Android device. You can connect them via Bluetooth or USB and customize the buttons according to your preferences.

-

Adjust the graphics settings to optimize performance

-

San Andreas is a graphically intensive game that requires a lot of resources from your device. Depending on your device's specifications, you might experience lagging, crashing, or overheating issues while playing the game. To avoid these problems, you can adjust the graphics settings of the game to suit your device's capabilities. You can access these settings from the game's menu and change things like resolution, draw distance, shadows, reflections, frame rate, and more.

-

Save your progress frequently and use cheats if you want

-

San Andreas is a long and challenging game that can take hours to complete. You don't want to lose your progress or start over from the beginning if something goes wrong. That's why you should save your progress frequently and use cheats if you want. You can save your progress at any safe house that you own or rent, which are marked by floppy disk icons on the map. You can also use cheats to enhance your gameplay, such as getting more money, weapons, health, armor, vehicles, or changing the weather, time, or wanted level. You can find a list of cheats for San Andreas on https://www.gtaall.com/gta-san-andreas/cheats/. However, be careful when using cheats, as they might affect the game's stability or disable some achievements.

-

Explore the vast open world and enjoy the missions and activities

-

One of the best things about San Andreas is its vast open world, which offers endless possibilities for exploration and fun. The game has a main storyline made up of missions that advance the plot and unlock new areas, characters, and features, but you don't have to follow it if you don't want to. You can also enjoy many missions and activities that are optional but rewarding. For example, you can join a gang and fight rival gangs; work as a taxi driver, firefighter, paramedic, or vigilante; take part in races, stunts, or challenges; gamble at casinos; rob stores or houses; date different girlfriends; visit the gym or barber shop; play arcade games or pool; watch TV or movies; listen to radio stations or CDs; and much more. The game has so much content that you will never get bored.

-

Conclusion

-

Summary of the main points

-

In conclusion, San Andreas is one of the best GTA games ever made and one of the most enjoyable games to play on your Android device. You can download it for free from APKPure, which is a reliable and secure source of APK files for Android users. You can also follow our tips and tricks to optimize your gaming experience and have more fun with San Andreas.

-

Call to action and final remarks

-

If you are ready to play San Andreas on your Android device, don't wait any longer. Go to APKPure's website or app and download San Andreas today. You will not regret it. San Andreas is a classic game that will keep you entertained for hours with its amazing story, gameplay, soundtrack, and world. It is a game that every GTA fan and every Android gamer should play at least once in their life.

-

Thank you for reading this article. We hope you found it useful and informative. If you have any questions or comments about San Andreas or APKPure, feel free to leave them below. We would love to hear from you.

-

Frequently Asked Questions

-

Here are some of the most common questions that people have about San Andreas and APKPure:

-
1. Is San Andreas legal to download from APKPure?

Yes, it is legal to download San Andreas from APKPure as long as you own a legitimate copy of the game on another platform. APKPure does not host any pirated or cracked games on its website or app. It only provides APK files that are original and unmodified.

2. Is San Andreas safe to download from APKPure?

Yes, it is safe to download San Andreas from APKPure as long as you download it from the official website or app. APKPure scans all its files for viruses and malware before uploading them to its servers. It also verifies the authenticity and integrity of the files with cryptographic signatures.

3. How much space does San Andreas take on my device?

San Andreas takes about 2.6 GB of space on your device after installation. However, you will need more space to download the APK file and the additional data files that are required for the game to run properly.

4. Can I play San Andreas offline?

Yes, you can play San Andreas offline without any internet connection. However, you will need an internet connection to download the game from APKPure and to verify your license once every 30 days.

5. Can I play San Andreas with my friends?

Yes, you can play San Andreas with your friends if you have two Android devices that support a Bluetooth or Wi-Fi connection. You can then use the multiplayer mode, which allows up to two players to cooperate or compete in various modes and missions.
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy the Ocean Adventure with Hungry Shark Evolution MOD APK Download the New Update Now.md b/spaces/1phancelerku/anime-remove-background/Enjoy the Ocean Adventure with Hungry Shark Evolution MOD APK Download the New Update Now.md deleted file mode 100644 index 8a3d0993fb568db0d021e5022024ef51eb882d13..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy the Ocean Adventure with Hungry Shark Evolution MOD APK Download the New Update Now.md +++ /dev/null @@ -1,110 +0,0 @@ -
-

Download Hungry Shark Evolution New Update Mod Apk

-

Do you love playing shark games on your mobile device? Do you want to experience the thrill of being a hungry shark in a beautiful underwater world? Do you want to unlock more sharks, accessories, and features without spending real money? If you answered yes to any of these questions, then you should download Hungry Shark Evolution new update mod apk. In this article, we will tell you everything you need to know about this amazing game and how to get the latest modded version for free.

-

download hungry shark evolution new update mod apk


DOWNLOAD ✫✫✫ https://jinyurl.com/2uNSP1



-

What is Hungry Shark Evolution?

-

Hungry Shark Evolution is a popular action-adventure game developed by Ubisoft Entertainment. It is the official game for Shark Week, and it lets you take control of a very hungry shark and go on a frantic ocean rampage. You can explore an open world both above and below the waves, enjoy jawsome 3D graphics and sound effects, and discover and devour mysterious creatures of the deep. You can also recruit baby sharks to boost your predatory powers, equip awesome accessories like lasers, jetpacks, and top hats, find and collect sunken bonus objects, and sink your teeth into loads of challenging missions. Activate gold rush to survive longer and score higher, take part in regular in-game events to win limited edition prizes, attack with intuitive touch or tilt controls, play offline wherever you are - no Wi-Fi needed - and synchronize your game easily across iOS devices.

-

Features of Hungry Shark Evolution

-

Some of the main features of Hungry Shark Evolution are:

- An open world to explore both above and below the waves, with 3D graphics and sound effects.
- Baby sharks you can recruit to boost your predatory powers.
- Accessories and gadgets such as lasers, jetpacks, and top hats.
- Sunken bonus objects to find and collect, plus loads of challenging missions.
- A gold rush mode that helps you survive longer and score higher.
- Regular in-game events with limited edition prizes.
- Intuitive touch or tilt controls, offline play, and easy game sync across iOS devices.

How to play Hungry Shark Evolution

-

Playing Hungry Shark Evolution is very easy and fun. You just need to follow these simple steps:

-
1. Choose a shark from the evolution menu. You can unlock more sharks by earning coins or gems, or by completing certain missions or events.
2. Tap the play button to start the game. You will see your shark in the ocean, ready to eat anything that moves.
3. Use the virtual joystick on the left side of the screen to move your shark around. You can also tilt your device to steer.
4. Use the boost button on the right side of the screen to make your shark swim faster and jump higher. Boosting consumes stamina, which regenerates over time.
5. Eat as many creatures as you can to fill up your hunger meter and score points. Avoid dangerous creatures like mines, jellyfish, or bigger sharks, as they will damage your health; eat health kits to restore it.
6. Fill up your gold rush meter by eating gold creatures. When it is full, tap the gold rush button to activate it and enjoy the benefits.
7. Complete missions and achievements to earn extra coins, gems, and rewards. You can check your progress in the pause menu.
8. When you die, you will see your final score and stats. You can also watch a video ad to revive your shark once per game.
9. Use your coins and gems to upgrade your shark's abilities, buy accessories and gadgets, recruit baby sharks, or unlock new sharks.
10. Have fun and keep evolving!

What is a mod apk?

-

A mod apk is a modified version of an original apk file. An apk file is an Android application package that contains all the files and data needed to install and run an app on an Android device. A mod apk usually has some changes or additions that are not present in the original apk file. These changes or additions can be anything from unlocking premium features, removing ads, adding unlimited resources, changing graphics or sounds, or even adding new content or gameplay modes .
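For the curious, an APK is ultimately just a ZIP archive with a conventional layout. The toy Python sketch below builds a stand-in archive in memory to illustrate; the entry names mirror a real APK's layout, but the contents here are placeholders:

```python
import io
import zipfile

# Build a toy archive in memory; a real APK contains these entries (and more).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("AndroidManifest.xml", "<manifest/>")  # app metadata (binary XML in a real APK)
    z.writestr("classes.dex", b"")                    # compiled Dalvik bytecode
    z.writestr("resources.arsc", b"")                 # compiled resources

# Any ZIP tool can list an APK's contents the same way.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as z:
    for name in z.namelist():
        print(name)
```

This is also why a modder can repackage an app: the archive can be unpacked, altered, and re-zipped, although the result must then be re-signed before Android will install it.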

-

Benefits of using a mod apk

-

Some of the benefits of using a mod apk are:

- Access to premium features or content that would otherwise require payment.
- Removal of ads that interrupt your gameplay.
- Unlimited in-game resources such as coins and gems.
- Tweaked graphics or sounds, and sometimes entirely new content or gameplay modes.

Risks of using a mod apk

-

However, using a mod apk also comes with some risks that you should be aware of:

- Modified files from untrusted sources can contain malware, so only download from sites that scan their files.
- Using mods in games with online features can get your account banned by the developer.
- Modded versions may be unstable, crash, or stop working after an official update.
- Downloading a modded copy of a paid game you do not own may violate the developer's terms of service or local law.

How to download Hungry Shark Evolution new update mod apk

-

If you want to download Hungry Shark Evolution new update mod apk, you need to follow these steps:

-

-

Requirements for downloading Hungry Shark Evolution new update mod apk

Before you download Hungry Shark Evolution new update mod apk, you need to make sure that you have the following requirements:

  - An Android device that meets the game's minimum system requirements.
  - Enough free storage space for the apk file and the installed game.
  - A stable internet connection for the download.
  - A file manager app to locate and open the downloaded file.

Steps for downloading Hungry Shark Evolution new update mod apk

Once you have the requirements, you can proceed with the steps for downloading Hungry Shark Evolution new update mod apk:

  1. Open your web browser and go to a website that provides Hungry Shark Evolution new update mod apk. You can search for it on Google or use a link from a trusted source.
  2. Find the download button or link for Hungry Shark Evolution new update mod apk and tap on it. You may need to complete some verification steps, such as entering a captcha code, agreeing to terms and conditions, or watching a video ad.
  3. Wait for the download to start and finish. You can check the progress in your notification bar or in your file manager app.
  4. Once the download is complete, you will see a notification or a pop-up window that asks you to open the file. Tap on it to proceed to the next step.
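Because a mod apk does not come from an official store, it is worth checking the downloaded file against a checksum if the source site publishes one. A minimal sketch in Python; the file path and the published hash are assumptions you would replace with your own values:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so large apks are not loaded fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: compare against the checksum the download page lists.
# if sha256_of_file("hungry_shark.apk") != published_hash:
#     print("Checksum mismatch - do not install this file.")
```

A mismatch means the file was corrupted in transit or tampered with, and it should not be installed.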

How to install Hungry Shark Evolution new update mod apk

After you download Hungry Shark Evolution new update mod apk, you need to follow these steps to install it on your device:

Permissions for installing Hungry Shark Evolution new update mod apk

Before you install Hungry Shark Evolution new update mod apk, you need to grant some permissions that are necessary for the app to work properly. These permissions are:

  - Install unknown apps (also called "unknown sources"): allows your browser or file manager to install apks from outside Google Play.
  - Storage access: allows the app to read the downloaded file and save its game data.

Steps for installing Hungry Shark Evolution new update mod apk

Once you have granted the permissions, you can proceed with the steps for installing Hungry Shark Evolution new update mod apk:

  1. Open the downloaded file using your file manager app or by tapping on the notification or pop-up window.
  2. You will see a screen that shows some information about the app, such as its name, size, version, developer, and permissions. Tap on the install button at the bottom right corner of the screen.
  3. Wait for the installation to start and finish. You can check the progress in your notification bar or in your file manager app.
  4. Once the installation is complete, you will see a screen that says "App installed". Tap on the open button at the bottom right corner of the screen to launch Hungry Shark Evolution new update mod apk.
  5. Congratulations! You can now start playing and enjoy the modded features.

    \ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/optimization.py b/spaces/1toTree/lora_test/ppdiffusers/optimization.py deleted file mode 100644 index a5d2c1bebf4c0c986c324b16d6b298d4c3fa384d..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/optimization.py +++ /dev/null @@ -1,312 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Paddle optimization for diffusion models.""" - -import math -from enum import Enum -from typing import Optional, Union - -from paddle.optimizer.lr import LambdaDecay - -from .utils import logging - -logger = logging.get_logger(__name__) - - -class SchedulerType(Enum): - LINEAR = "linear" - COSINE = "cosine" - COSINE_WITH_RESTARTS = "cosine_with_restarts" - POLYNOMIAL = "polynomial" - CONSTANT = "constant" - CONSTANT_WITH_WARMUP = "constant_with_warmup" - - -def get_constant_schedule(learning_rate: float, last_epoch: int = -1): - """ - Create a schedule with a constant learning rate, using the learning rate set in optimizer. - - Args: - learning_rate (`float`): - The base learning rate. It is a python float number. - last_epoch (`int`, *optional*, defaults to -1): - The index of the last epoch when resuming training. - - Return: - `paddle.optimizer.lr.LambdaDecay` with the appropriate schedule. 
- """ - return LambdaDecay(learning_rate, lambda _: 1, last_epoch=last_epoch) - - -def get_constant_schedule_with_warmup(learning_rate: float, num_warmup_steps: int, last_epoch: int = -1): - """ - Create a schedule with a constant learning rate preceded by a warmup period during which the learning rate - increases linearly between 0 and the initial lr set in the optimizer. - - Args: - learning_rate (`float`): - The base learning rate. It is a python float number. - num_warmup_steps (`int`): - The number of steps for the warmup phase. - last_epoch (`int`, *optional*, defaults to -1): - The index of the last epoch when resuming training. - - Return: - `paddle.optimizer.lr.LambdaDecay` with the appropriate schedule. - """ - - def lr_lambda(current_step: int): - if current_step < num_warmup_steps: - return float(current_step) / float(max(1.0, num_warmup_steps)) - return 1.0 - - return LambdaDecay(learning_rate, lr_lambda, last_epoch=last_epoch) - - -def get_linear_schedule_with_warmup( - learning_rate: float, num_warmup_steps: int, num_training_steps: int, last_epoch: int = -1 -): - """ - Create a schedule with a learning rate that decreases linearly from the initial lr set in the optimizer to 0, after - a warmup period during which it increases linearly from 0 to the initial lr set in the optimizer. - - Args: - learning_rate (`float`): - The base learning rate. It is a python float number. - num_warmup_steps (`int`): - The number of steps for the warmup phase. - num_training_steps (`int`): - The total number of training steps. - last_epoch (`int`, *optional*, defaults to -1): - The index of the last epoch when resuming training. - - Return: - `paddle.optimizer.lr.LambdaDecay` with the appropriate schedule. 
- """ - - def lr_lambda(current_step: int): - if current_step < num_warmup_steps: - return float(current_step) / float(max(1, num_warmup_steps)) - return max( - 0.0, float(num_training_steps - current_step) / float(max(1, num_training_steps - num_warmup_steps)) - ) - - return LambdaDecay(learning_rate, lr_lambda, last_epoch) - - -def get_cosine_schedule_with_warmup( - learning_rate: float, num_warmup_steps: int, num_training_steps: int, num_cycles: float = 0.5, last_epoch: int = -1 -): - """ - Create a schedule with a learning rate that decreases following the values of the cosine function between the - initial lr set in the optimizer to 0, after a warmup period during which it increases linearly between 0 and the - initial lr set in the optimizer. - - Args: - learning_rate (`float`): - The base learning rate. It is a python float number. - num_warmup_steps (`int`): - The number of steps for the warmup phase. - num_training_steps (`int`): - The total number of training steps. - num_cycles (`float`, *optional*, defaults to 0.5): - The number of waves in the cosine schedule (the defaults is to just decrease from the max value to 0 - following a half-cosine). - last_epoch (`int`, *optional*, defaults to -1): - The index of the last epoch when resuming training. - - Return: - `paddle.optimizer.lr.LambdaDecay` with the appropriate schedule. 
- """ - - def lr_lambda(current_step): - if current_step < num_warmup_steps: - return float(current_step) / float(max(1, num_warmup_steps)) - progress = float(current_step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps)) - return max(0.0, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress))) - - return LambdaDecay(learning_rate, lr_lambda, last_epoch) - - -def get_cosine_with_hard_restarts_schedule_with_warmup( - learning_rate: float, num_warmup_steps: int, num_training_steps: int, num_cycles: int = 1, last_epoch: int = -1 -): - """ - Create a schedule with a learning rate that decreases following the values of the cosine function between the - initial lr set in the optimizer to 0, with several hard restarts, after a warmup period during which it increases - linearly between 0 and the initial lr set in the optimizer. - - Args: - learning_rate (`float`): - The base learning rate. It is a python float number. - num_warmup_steps (`int`): - The number of steps for the warmup phase. - num_training_steps (`int`): - The total number of training steps. - num_cycles (`int`, *optional*, defaults to 1): - The number of hard restarts to use. - last_epoch (`int`, *optional*, defaults to -1): - The index of the last epoch when resuming training. - - Return: - `paddle.optimizer.lr.LambdaDecay` with the appropriate schedule. 
- """ - - def lr_lambda(current_step): - if current_step < num_warmup_steps: - return float(current_step) / float(max(1, num_warmup_steps)) - progress = float(current_step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps)) - if progress >= 1.0: - return 0.0 - return max(0.0, 0.5 * (1.0 + math.cos(math.pi * ((float(num_cycles) * progress) % 1.0)))) - - return LambdaDecay(learning_rate, lr_lambda, last_epoch) - - -def get_polynomial_decay_schedule_with_warmup( - learning_rate: float, - num_warmup_steps: int, - num_training_steps: int, - lr_end: float = 1e-7, - power: float = 1.0, - last_epoch: int = -1, -): - """ - Create a schedule with a learning rate that decreases as a polynomial decay from the initial lr set in the - optimizer to end lr defined by *lr_end*, after a warmup period during which it increases linearly from 0 to the - initial lr set in the optimizer. - - Args: - learning_rate (`float`): - The base learning rate. It is a python float number. - num_warmup_steps (`int`): - The number of steps for the warmup phase. - num_training_steps (`int`): - The total number of training steps. - lr_end (`float`, *optional*, defaults to 1e-7): - The end LR. - power (`float`, *optional*, defaults to 1.0): - Power factor. - last_epoch (`int`, *optional*, defaults to -1): - The index of the last epoch when resuming training. - - Note: *power* defaults to 1.0 as in the fairseq implementation, which in turn is based on the original BERT - implementation at - https://github.com/google-research/bert/blob/f39e881b169b9d53bea03d2d341b31707a6c052b/optimization.py#L37 - - Return: - `paddle.optimizer.lr.LambdaDecay` with the appropriate schedule. 
- - """ - - lr_init = learning_rate - if not (lr_init > lr_end): - raise ValueError(f"lr_end ({lr_end}) must be be smaller than initial lr ({lr_init})") - - def lr_lambda(current_step: int): - if current_step < num_warmup_steps: - return float(current_step) / float(max(1, num_warmup_steps)) - elif current_step > num_training_steps: - return lr_end / lr_init # as LambdaLR multiplies by lr_init - else: - lr_range = lr_init - lr_end - decay_steps = num_training_steps - num_warmup_steps - pct_remaining = 1 - (current_step - num_warmup_steps) / decay_steps - decay = lr_range * pct_remaining**power + lr_end - return decay / lr_init # as LambdaLR multiplies by lr_init - - return LambdaDecay(learning_rate, lr_lambda, last_epoch) - - -TYPE_TO_SCHEDULER_FUNCTION = { - SchedulerType.LINEAR: get_linear_schedule_with_warmup, - SchedulerType.COSINE: get_cosine_schedule_with_warmup, - SchedulerType.COSINE_WITH_RESTARTS: get_cosine_with_hard_restarts_schedule_with_warmup, - SchedulerType.POLYNOMIAL: get_polynomial_decay_schedule_with_warmup, - SchedulerType.CONSTANT: get_constant_schedule, - SchedulerType.CONSTANT_WITH_WARMUP: get_constant_schedule_with_warmup, -} - - -def get_scheduler( - name: Union[str, SchedulerType], - learning_rate: float = 0.1, - num_warmup_steps: Optional[int] = None, - num_training_steps: Optional[int] = None, - num_cycles: int = 1, - power: float = 1.0, - last_epoch: int = -1, -): - """ - Unified API to get any scheduler from its name. - - Args: - name (`str` or `SchedulerType`): - The name of the scheduler to use. - learning_rate (`float`): - The base learning rate. It is a python float number. - num_warmup_steps (`int`, *optional*): - The number of warmup steps to do. This is not required by all schedulers (hence the argument being - optional), the function will raise an error if it's unset and the scheduler type requires it. - num_training_steps (`int``, *optional*): - The number of training steps to do. 
This is not required by all schedulers (hence the argument being - optional), the function will raise an error if it's unset and the scheduler type requires it. - num_cycles (`int`, *optional*): - The number of hard restarts used in `COSINE_WITH_RESTARTS` scheduler. - power (`float`, *optional*, defaults to 1.0): - Power factor. See `POLYNOMIAL` scheduler - last_epoch (`int`, *optional*, defaults to -1): - The index of the last epoch when resuming training. - """ - name = SchedulerType(name) - schedule_func = TYPE_TO_SCHEDULER_FUNCTION[name] - if name == SchedulerType.CONSTANT: - return schedule_func(learning_rate=learning_rate, last_epoch=last_epoch) - - # All other schedulers require `num_warmup_steps` - if num_warmup_steps is None: - raise ValueError(f"{name} requires `num_warmup_steps`, please provide that argument.") - - if name == SchedulerType.CONSTANT_WITH_WARMUP: - return schedule_func(learning_rate=learning_rate, num_warmup_steps=num_warmup_steps, last_epoch=last_epoch) - - # All other schedulers require `num_training_steps` - if num_training_steps is None: - raise ValueError(f"{name} requires `num_training_steps`, please provide that argument.") - - if name == SchedulerType.COSINE_WITH_RESTARTS: - return schedule_func( - learning_rate=learning_rate, - num_warmup_steps=num_warmup_steps, - num_training_steps=num_training_steps, - num_cycles=num_cycles, - last_epoch=last_epoch, - ) - - if name == SchedulerType.POLYNOMIAL: - return schedule_func( - learning_rate=learning_rate, - num_warmup_steps=num_warmup_steps, - num_training_steps=num_training_steps, - power=power, - last_epoch=last_epoch, - ) - - return schedule_func( - learning_rate=learning_rate, - num_warmup_steps=num_warmup_steps, - num_training_steps=num_training_steps, - last_epoch=last_epoch, - ) diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py 
b/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py deleted file mode 100644 index bca486f8fad435b45540af6227cf1b834bead108..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py +++ /dev/null @@ -1,555 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Callable, List, Optional, Union - -import numpy as np -import paddle -import PIL -from packaging import version - -from paddlenlp.transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - -from ...configuration_utils import FrozenDict -from ...models import AutoencoderKL, UNet2DConditionModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import ( - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, -) -from ...utils import PIL_INTERPOLATION, deprecate, logging -from . 
import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def preprocess(image): - if isinstance(image, paddle.Tensor): - return image - elif isinstance(image, PIL.Image.Image): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - w, h = image[0].size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - - image = [np.array(i.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]))[None, :] for i in image] - image = np.concatenate(image, axis=0) - image = np.array(image).astype(np.float32) / 255.0 - image = image.transpose(0, 3, 1, 2) - image = 2.0 * image - 1.0 - image = paddle.to_tensor(image) - elif isinstance(image[0], paddle.Tensor): - image = paddle.concat(image, axis=0) - return image - - -class StableDiffusionImg2ImgPipeline(DiffusionPipeline): - r""" - Pipeline for text-guided image to image generation using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular xxxx, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. 
- scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`PNDMScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`] - or [`DPMSolverMultistepScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.__init__ - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[ - DDIMScheduler, - PNDMScheduler, - LMSDiscreteScheduler, - EulerDiscreteScheduler, - EulerAncestralDiscreteScheduler, - DPMSolverMultistepScheduler, - ], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. 
If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. PaddleNLP team, diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." 
- ) - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - is_unet_version_less_0_9_0 = hasattr(unet.config, "_ppdiffusers_version") and version.parse( - version.parse(unet.config._ppdiffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. 
If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt - def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). 
- """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pd", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pd").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not paddle.equal_all( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - config = ( - self.text_encoder.config - if isinstance(self.text_encoder.config, dict) - else self.text_encoder.config.to_dict() - ) - if config.get("use_attention_mask", None) is not None and config["use_attention_mask"]: - attention_mask = text_inputs.attention_mask - else: - attention_mask = None - - text_embeddings = self.text_encoder( - text_input_ids, - attention_mask=attention_mask, - ) - text_embeddings = text_embeddings[0] - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.tile([1, num_images_per_prompt, 1]) - text_embeddings = text_embeddings.reshape([bs_embed * num_images_per_prompt, seq_len, -1]) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." 
- ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pd", - ) - - if config.get("use_attention_mask", None) is not None and config["use_attention_mask"]: - attention_mask = uncond_input.attention_mask - else: - attention_mask = None - - uncond_embeddings = self.text_encoder( - uncond_input.input_ids, - attention_mask=attention_mask, - ) - uncond_embeddings = uncond_embeddings[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.tile([1, num_images_per_prompt, 1]) - uncond_embeddings = uncond_embeddings.reshape([batch_size * num_images_per_prompt, seq_len, -1]) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = paddle.concat([uncond_embeddings, text_embeddings]) - - return text_embeddings - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pd") - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.cast(dtype) - ) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clip(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - image = image.transpose([0, 2, 3, 1]).cast("float32").numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs(self, prompt, strength, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [1.0, 1.0] but is {strength}") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - def get_timesteps(self, num_inference_steps, strength): - # get the original timestep using init_timestep - init_timestep = min(int(num_inference_steps * strength), num_inference_steps) - - t_start = max(num_inference_steps - init_timestep, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, generator=None): - image = image.cast(dtype=dtype) - - batch_size = batch_size * num_images_per_prompt - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - if isinstance(generator, list): - init_latents = [ - self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size) - ] - init_latents = paddle.concat(init_latents, axis=0) - else: - init_latents = self.vae.encode(image).latent_dist.sample(generator) - init_latents = 0.18215 * init_latents - - if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] == 0: - # expand init_latents for batch_size - deprecation_message = ( - f"You have passed {batch_size} text prompts (`prompt`), but only {init_latents.shape[0]} initial" - " images (`image`). Initial images are now duplicating to match the number of text prompts. Note" - " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update" - " your script to pass as many initial images as text prompts to suppress this warning." - ) - deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False) - additional_image_per_prompt = batch_size // init_latents.shape[0] - init_latents = paddle.concat([init_latents] * additional_image_per_prompt, axis=0) - elif batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0: - raise ValueError( - f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts." 
- ) - else: - init_latents = paddle.concat([init_latents], axis=0) - - shape = init_latents.shape - if isinstance(generator, list): - shape = [ - 1, - ] + shape[1:] - noise = [paddle.randn(shape, generator=generator[i], dtype=dtype) for i in range(batch_size)] - noise = paddle.concat(noise, axis=0) - else: - noise = paddle.randn(shape, generator=generator, dtype=dtype) - - # get latents - init_latents = self.scheduler.add_noise(init_latents, noise, timestep) - latents = init_latents - - return latents - - @paddle.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[paddle.Tensor, PIL.Image.Image] = None, - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: Optional[float] = 0.0, - generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, paddle.Tensor], None]] = None, - callback_steps: Optional[int] = 1, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`paddle.Tensor` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. - `image` will be used as a starting point, adding more noise to it the larger the `strength`. The - number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added - noise will be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. 
- num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. This parameter will be modulated by `strength`. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2 of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`paddle.Generator`, *optional*): - One or a list of paddle generator(s) to make generation deterministic. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: paddle.Tensor)`. 
- callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 1. Check inputs - self.check_inputs(prompt, strength, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_embeddings = self._encode_prompt( - prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # 4. Preprocess image - image = preprocess(image) - - # 5. set timesteps - self.scheduler.set_timesteps(num_inference_steps) - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength) - latent_timestep = timesteps[:1].tile([batch_size * num_images_per_prompt]) - - # 6. Prepare latent variables - latents = self.prepare_latents( - image, latent_timestep, batch_size, num_images_per_prompt, text_embeddings.dtype, generator - ) - - # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 8. 
Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 9. Post-processing - image = self.decode_latents(latents) - - # 10. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, text_embeddings.dtype) - - # 11. 
Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/6shen7/Linaqruf-anything-v3.0/app.py b/spaces/6shen7/Linaqruf-anything-v3.0/app.py deleted file mode 100644 index 16e8131a0bbf7b06956e69e2b7758fa01e4eb51f..0000000000000000000000000000000000000000 --- a/spaces/6shen7/Linaqruf-anything-v3.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Linaqruf/anything-v3.0").launch() \ No newline at end of file diff --git a/spaces/801artistry/RVC801/Applio-RVC-Fork/utils/README.md b/spaces/801artistry/RVC801/Applio-RVC-Fork/utils/README.md deleted file mode 100644 index fb45a36b5909585aa964f2033762ee59b55526b0..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/Applio-RVC-Fork/utils/README.md +++ /dev/null @@ -1,6 +0,0 @@ -# External Colab Code -Code used to make Google Colab work correctly -- Repo link: https://github.com/IAHispano/Applio-RVC-Fork/ - -Thanks to https://github.com/kalomaze/externalcolabcode - diff --git a/spaces/801artistry/RVC801/i18n/locale_diff.py b/spaces/801artistry/RVC801/i18n/locale_diff.py deleted file mode 100644 index 387ddfe1b16c2f9f32b6b9682b61353837b06bd8..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/i18n/locale_diff.py +++ /dev/null @@ -1,45 +0,0 @@ -import json -import os -from collections import OrderedDict - -# Define the standard file name -standard_file = "en_US.json" - -# Find all JSON files in the directory -dir_path = "./" -languages = [ - f for f in os.listdir(dir_path) if f.endswith(".json") and f != standard_file -] - -# Load the standard file -with open(standard_file, "r", encoding="utf-8") as f: - standard_data = json.load(f, object_pairs_hook=OrderedDict) - -# Loop through each language file -for lang_file in languages: - # Load the language file - 
with open(lang_file, "r", encoding="utf-8") as f: - lang_data = json.load(f, object_pairs_hook=OrderedDict) - - # Find the difference between the language file and the standard file - diff = set(standard_data.keys()) - set(lang_data.keys()) - - miss = set(lang_data.keys()) - set(standard_data.keys()) - - # Add any missing keys to the language file - for key in diff: - lang_data[key] = key - - # Delete any extra keys from the language file - for key in miss: - del lang_data[key] - - # Sort the keys of the language file to match the order of the standard file - lang_data = OrderedDict( - sorted(lang_data.items(), key=lambda x: list(standard_data.keys()).index(x[0])) - ) - - # Save the updated language file - with open(lang_file, "w", encoding="utf-8") as f: - json.dump(lang_data, f, ensure_ascii=False, indent=4) - f.write("\n") diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/joints2smpl/src/customloss.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/joints2smpl/src/customloss.py deleted file mode 100644 index 880ab4861c58cec9faeb086e430fde7387c5cc9e..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/joints2smpl/src/customloss.py +++ /dev/null @@ -1,222 +0,0 @@ -import torch -import torch.nn.functional as F -from visualize.joints2smpl.src import config - -# Gaussian -def gmof(x, sigma): - """ - Geman-McClure error function - """ - x_squared = x ** 2 - sigma_squared = sigma ** 2 - return (sigma_squared * x_squared) / (sigma_squared + x_squared) - -# angle prior -def angle_prior(pose): - """ - Angle prior that penalizes unnatural bending of the knees and elbows - """ - # We subtract 3 because pose does not include the global rotation of the model - return torch.exp( - pose[:, [55 - 3, 58 - 3, 12 - 3, 15 - 3]] * torch.tensor([1., -1., -1, -1.], device=pose.device)) ** 2 - - -def perspective_projection(points, rotation, translation, - focal_length, camera_center): - """ - This function 
computes the perspective projection of a set of points. - Input: - points (bs, N, 3): 3D points - rotation (bs, 3, 3): Camera rotation - translation (bs, 3): Camera translation - focal_length (bs,) or scalar: Focal length - camera_center (bs, 2): Camera center - """ - batch_size = points.shape[0] - K = torch.zeros([batch_size, 3, 3], device=points.device) - K[:, 0, 0] = focal_length - K[:, 1, 1] = focal_length - K[:, 2, 2] = 1. - K[:, :-1, -1] = camera_center - - # Transform points - points = torch.einsum('bij,bkj->bki', rotation, points) - points = points + translation.unsqueeze(1) - - # Apply perspective distortion - projected_points = points / points[:, :, -1].unsqueeze(-1) - - # Apply camera intrinsics - projected_points = torch.einsum('bij,bkj->bki', K, projected_points) - - return projected_points[:, :, :-1] - - -def body_fitting_loss(body_pose, betas, model_joints, camera_t, camera_center, - joints_2d, joints_conf, pose_prior, - focal_length=5000, sigma=100, pose_prior_weight=4.78, - shape_prior_weight=5, angle_prior_weight=15.2, - output='sum'): - """ - Loss function for body fitting - """ - batch_size = body_pose.shape[0] - rotation = torch.eye(3, device=body_pose.device).unsqueeze(0).expand(batch_size, -1, -1) - - projected_joints = perspective_projection(model_joints, rotation, camera_t, - focal_length, camera_center) - - # Weighted robust reprojection error - reprojection_error = gmof(projected_joints - joints_2d, sigma) - reprojection_loss = (joints_conf ** 2) * reprojection_error.sum(dim=-1) - - # Pose prior loss - pose_prior_loss = (pose_prior_weight ** 2) * pose_prior(body_pose, betas) - - # Angle prior for knees and elbows - angle_prior_loss = (angle_prior_weight ** 2) * angle_prior(body_pose).sum(dim=-1) - - # Regularizer to prevent betas from taking large values - shape_prior_loss = (shape_prior_weight ** 2) * (betas ** 2).sum(dim=-1) - - total_loss = reprojection_loss.sum(dim=-1) + pose_prior_loss + angle_prior_loss + shape_prior_loss - - if 
output == 'sum': - return total_loss.sum() - elif output == 'reprojection': - return reprojection_loss - - -# --- get camera fitting loss ----- -def camera_fitting_loss(model_joints, camera_t, camera_t_est, camera_center, - joints_2d, joints_conf, - focal_length=5000, depth_loss_weight=100): - """ - Loss function for camera optimization. - """ - # Project model joints - batch_size = model_joints.shape[0] - rotation = torch.eye(3, device=model_joints.device).unsqueeze(0).expand(batch_size, -1, -1) - projected_joints = perspective_projection(model_joints, rotation, camera_t, - focal_length, camera_center) - - # get the indexed four - op_joints = ['OP RHip', 'OP LHip', 'OP RShoulder', 'OP LShoulder'] - op_joints_ind = [config.JOINT_MAP[joint] for joint in op_joints] - gt_joints = ['RHip', 'LHip', 'RShoulder', 'LShoulder'] - gt_joints_ind = [config.JOINT_MAP[joint] for joint in gt_joints] - - reprojection_error_op = (joints_2d[:, op_joints_ind] - - projected_joints[:, op_joints_ind]) ** 2 - reprojection_error_gt = (joints_2d[:, gt_joints_ind] - - projected_joints[:, gt_joints_ind]) ** 2 - - # Check if for each example in the batch all 4 OpenPose detections are valid, otherwise use the GT detections - # OpenPose joints are more reliable for this task, so we prefer to use them if possible - is_valid = (joints_conf[:, op_joints_ind].min(dim=-1)[0][:, None, None] > 0).float() - reprojection_loss = (is_valid * reprojection_error_op + (1 - is_valid) * reprojection_error_gt).sum(dim=(1, 2)) - - # Loss that penalizes deviation from depth estimate - depth_loss = (depth_loss_weight ** 2) * (camera_t[:, 2] - camera_t_est[:, 2]) ** 2 - - total_loss = reprojection_loss + depth_loss - return total_loss.sum() - - - - # #####--- body fitting loss ----- -def body_fitting_loss_3d(body_pose, preserve_pose, - betas, model_joints, camera_translation, - j3d, pose_prior, - joints3d_conf, - sigma=100, pose_prior_weight=4.78*1.5, - shape_prior_weight=5.0, angle_prior_weight=15.2, - 
joint_loss_weight=500.0, - pose_preserve_weight=0.0, - use_collision=False, - model_vertices=None, model_faces=None, - search_tree=None, pen_distance=None, filter_faces=None, - collision_loss_weight=1000 - ): - """ - Loss function for body fitting - """ - batch_size = body_pose.shape[0] - - #joint3d_loss = (joint_loss_weight ** 2) * gmof((model_joints + camera_translation) - j3d, sigma).sum(dim=-1) - - joint3d_error = gmof((model_joints + camera_translation) - j3d, sigma) - - joint3d_loss_part = (joints3d_conf ** 2) * joint3d_error.sum(dim=-1) - joint3d_loss = ((joint_loss_weight ** 2) * joint3d_loss_part).sum(dim=-1) - - # Pose prior loss - pose_prior_loss = (pose_prior_weight ** 2) * pose_prior(body_pose, betas) - # Angle prior for knees and elbows - angle_prior_loss = (angle_prior_weight ** 2) * angle_prior(body_pose).sum(dim=-1) - # Regularizer to prevent betas from taking large values - shape_prior_loss = (shape_prior_weight ** 2) * (betas ** 2).sum(dim=-1) - - collision_loss = 0.0 - # Calculate the loss due to interpenetration - if use_collision: - triangles = torch.index_select( - model_vertices, 1, - model_faces).view(batch_size, -1, 3, 3) - - with torch.no_grad(): - collision_idxs = search_tree(triangles) - - # Remove unwanted collisions - if filter_faces is not None: - collision_idxs = filter_faces(collision_idxs) - - if collision_idxs.ge(0).sum().item() > 0: - collision_loss = torch.sum(collision_loss_weight * pen_distance(triangles, collision_idxs)) - - pose_preserve_loss = (pose_preserve_weight ** 2) * ((body_pose - preserve_pose) ** 2).sum(dim=-1) - - # print('joint3d_loss', joint3d_loss.shape) - # print('pose_prior_loss', pose_prior_loss.shape) - # print('angle_prior_loss', angle_prior_loss.shape) - # print('shape_prior_loss', shape_prior_loss.shape) - # print('collision_loss', collision_loss) - # print('pose_preserve_loss', pose_preserve_loss.shape) - - total_loss = joint3d_loss + pose_prior_loss + angle_prior_loss + shape_prior_loss + 
collision_loss + pose_preserve_loss - - return total_loss.sum() - - -# #####--- get camera fitting loss ----- -def camera_fitting_loss_3d(model_joints, camera_t, camera_t_est, - j3d, joints_category="orig", depth_loss_weight=100.0): - """ - Loss function for camera optimization. - """ - model_joints = model_joints + camera_t - # # get the indexed four - # op_joints = ['OP RHip', 'OP LHip', 'OP RShoulder', 'OP LShoulder'] - # op_joints_ind = [config.JOINT_MAP[joint] for joint in op_joints] - # - # j3d_error_loss = (j3d[:, op_joints_ind] - - # model_joints[:, op_joints_ind]) ** 2 - - gt_joints = ['RHip', 'LHip', 'RShoulder', 'LShoulder'] - gt_joints_ind = [config.JOINT_MAP[joint] for joint in gt_joints] - - if joints_category=="orig": - select_joints_ind = [config.JOINT_MAP[joint] for joint in gt_joints] - elif joints_category=="AMASS": - select_joints_ind = [config.AMASS_JOINT_MAP[joint] for joint in gt_joints] - else: - print("NO SUCH JOINTS CATEGORY!") - - j3d_error_loss = (j3d[:, select_joints_ind] - - model_joints[:, gt_joints_ind]) ** 2 - - # Loss that penalizes deviation from depth estimate - depth_loss = (depth_loss_weight**2) * (camera_t - camera_t_est)**2 - - total_loss = j3d_error_loss + depth_loss - return total_loss.sum() diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/layers/tf_layers.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/layers/tf_layers.py deleted file mode 100644 index c0f46bd755c161cda2ac904fe37f3f3c6357a88d..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/layers/tf_layers.py +++ /dev/null @@ -1,129 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2020 MINH ANH (@dathudeptrai) -# MIT License (https://opensource.org/licenses/MIT) - -"""Tensorflow Layer modules compatible with pytorch.""" - -import tensorflow as tf - - -class TFReflectionPad1d(tf.keras.layers.Layer): - """Tensorflow ReflectionPad1d module.""" - - def __init__(self, 
padding_size): - """Initialize TFReflectionPad1d module. - - Args: - padding_size (int): Padding size. - - """ - super(TFReflectionPad1d, self).__init__() - self.padding_size = padding_size - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensor: Padded tensor (B, T + 2 * padding_size, 1, C). - - """ - return tf.pad(x, [[0, 0], [self.padding_size, self.padding_size], [0, 0], [0, 0]], "REFLECT") - - -class TFConvTranspose1d(tf.keras.layers.Layer): - """Tensorflow ConvTranspose1d module.""" - - def __init__(self, channels, kernel_size, stride, padding): - """Initialize TFConvTranspose1d module. - - Args: - channels (int): Number of channels. - kernel_size (int): Kernel size. - stride (int): Stride width. - padding (str): Padding type ("same" or "valid"). - - """ - super(TFConvTranspose1d, self).__init__() - self.conv1d_transpose = tf.keras.layers.Conv2DTranspose( - filters=channels, - kernel_size=(kernel_size, 1), - strides=(stride, 1), - padding=padding, - ) - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensor: Output tensor (B, T', 1, C'). - - """ - x = self.conv1d_transpose(x) - return x - - -class TFResidualStack(tf.keras.layers.Layer): - """Tensorflow ResidualStack module.""" - - def __init__(self, - kernel_size, - channels, - dilation, - bias, - nonlinear_activation, - nonlinear_activation_params, - padding, - ): - """Initialize TFResidualStack module. - - Args: - kernel_size (int): Kernel size. - channels (int): Number of channels. - dilation (int): Dilation size. - bias (bool): Whether to add bias parameter in convolution layers. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - padding (str): Padding type ("same" or "valid"). 
- - """ - super(TFResidualStack, self).__init__() - self.block = [ - getattr(tf.keras.layers, nonlinear_activation)(**nonlinear_activation_params), - TFReflectionPad1d(dilation), - tf.keras.layers.Conv2D( - filters=channels, - kernel_size=(kernel_size, 1), - dilation_rate=(dilation, 1), - use_bias=bias, - padding="valid", - ), - getattr(tf.keras.layers, nonlinear_activation)(**nonlinear_activation_params), - tf.keras.layers.Conv2D(filters=channels, kernel_size=1, use_bias=bias) - ] - self.shortcut = tf.keras.layers.Conv2D(filters=channels, kernel_size=1, use_bias=bias) - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensor: Output tensor (B, T, 1, C). - - """ - _x = tf.identity(x) - for i, layer in enumerate(self.block): - _x = layer(_x) - shortcut = self.shortcut(x) - return shortcut + _x diff --git a/spaces/AIGC-Audio/AudioGPT/README.md b/spaces/AIGC-Audio/AudioGPT/README.md deleted file mode 100644 index 79f8ff1ec34465f13a67598e0e82a7030c2cf563..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AudioGPT -emoji: 🚀 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov6/yolov6_s_syncbn_fast_8xb32-400e_coco.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov6/yolov6_s_syncbn_fast_8xb32-400e_coco.py deleted file mode 100644 index 55ce15825756a451ed8e19dd00f0a74ac9e46025..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov6/yolov6_s_syncbn_fast_8xb32-400e_coco.py +++ /dev/null @@ -1,280 +0,0 @@ -_base_ = 
['../_base_/default_runtime.py', '../_base_/det_p5_tta.py'] - -# ======================= Frequently modified parameters ===================== -# -----data related----- -data_root = 'data/coco/' # Root path of data -# Path of train annotation file -train_ann_file = 'annotations/instances_train2017.json' -train_data_prefix = 'train2017/' # Prefix of train image path -# Path of val annotation file -val_ann_file = 'annotations/instances_val2017.json' -val_data_prefix = 'val2017/' # Prefix of val image path - -num_classes = 80 # Number of classes for classification -# Batch size of a single GPU during training -train_batch_size_per_gpu = 32 -# Worker to pre-fetch data for each single GPU during training -train_num_workers = 8 -# persistent_workers must be False if num_workers is 0 -persistent_workers = True - -# -----train val related----- -# Base learning rate for optim_wrapper -base_lr = 0.01 -max_epochs = 400 # Maximum training epochs -num_last_epochs = 15 # Last epoch number to switch training pipeline - -# ======================= Possible modified parameters ======================= -# -----data related----- -img_scale = (640, 640) # width, height -# Dataset type, this will be used to define the dataset -dataset_type = 'YOLOv5CocoDataset' -# Batch size of a single GPU during validation -val_batch_size_per_gpu = 1 -# Worker to pre-fetch data for each single GPU during validation -val_num_workers = 2 - -# Config of batch shapes. Only on val. -# It means not used if batch_shapes_cfg is None. 
-batch_shapes_cfg = dict( - type='BatchShapePolicy', - batch_size=val_batch_size_per_gpu, - img_size=img_scale[0], - size_divisor=32, - extra_pad_ratio=0.5) - -# -----model related----- -# The scaling factor that controls the depth of the network structure -deepen_factor = 0.33 -# The scaling factor that controls the width of the network structure -widen_factor = 0.5 - -# -----train val related----- -affine_scale = 0.5 # YOLOv5RandomAffine scaling ratio -lr_factor = 0.01 # Learning rate scaling factor -weight_decay = 0.0005 -# Save model checkpoint and validation intervals -save_epoch_intervals = 10 -# The maximum checkpoints to keep. -max_keep_ckpts = 3 -# Single-scale training is recommended to -# be turned on, which can speed up training. -env_cfg = dict(cudnn_benchmark=True) - -# ============================== Unmodified in most cases =================== -model = dict( - type='YOLODetector', - data_preprocessor=dict( - type='YOLOv5DetDataPreprocessor', - mean=[0., 0., 0.], - std=[255., 255., 255.], - bgr_to_rgb=True), - backbone=dict( - type='YOLOv6EfficientRep', - deepen_factor=deepen_factor, - widen_factor=widen_factor, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='ReLU', inplace=True)), - neck=dict( - type='YOLOv6RepPAFPN', - deepen_factor=deepen_factor, - widen_factor=widen_factor, - in_channels=[256, 512, 1024], - out_channels=[128, 256, 512], - num_csp_blocks=12, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='ReLU', inplace=True), - ), - bbox_head=dict( - type='YOLOv6Head', - head_module=dict( - type='YOLOv6HeadModule', - num_classes=num_classes, - in_channels=[128, 256, 512], - widen_factor=widen_factor, - norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), - act_cfg=dict(type='SiLU', inplace=True), - featmap_strides=[8, 16, 32]), - loss_bbox=dict( - type='IoULoss', - iou_mode='giou', - bbox_format='xyxy', - reduction='mean', - loss_weight=2.5, - return_iou=False)), - train_cfg=dict( - 
initial_epoch=4, - initial_assigner=dict( - type='BatchATSSAssigner', - num_classes=num_classes, - topk=9, - iou_calculator=dict(type='mmdet.BboxOverlaps2D')), - assigner=dict( - type='BatchTaskAlignedAssigner', - num_classes=num_classes, - topk=13, - alpha=1, - beta=6), - ), - test_cfg=dict( - multi_label=True, - nms_pre=30000, - score_thr=0.001, - nms=dict(type='nms', iou_threshold=0.65), - max_per_img=300)) - -# The training pipeline of YOLOv6 is basically the same as YOLOv5. -# The difference is that Mosaic and RandomAffine will be closed in the last 15 epochs. # noqa -pre_transform = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - dict(type='LoadAnnotations', with_bbox=True) -] - -train_pipeline = [ - *pre_transform, - dict( - type='Mosaic', - img_scale=img_scale, - pad_val=114.0, - pre_transform=pre_transform), - dict( - type='YOLOv5RandomAffine', - max_rotate_degree=0.0, - max_translate_ratio=0.1, - scaling_ratio_range=(1 - affine_scale, 1 + affine_scale), - # img_scale is (width, height) - border=(-img_scale[0] // 2, -img_scale[1] // 2), - border_val=(114, 114, 114), - max_shear_degree=0.0), - dict(type='YOLOv5HSVRandomAug'), - dict(type='mmdet.RandomFlip', prob=0.5), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', - 'flip_direction')) -] - -train_pipeline_stage2 = [ - *pre_transform, - dict(type='mmyolo.YOLOv5KeepRatioResize', scale=img_scale), - dict( - type='mmyolo.LetterResize', - scale=img_scale, - allow_scale_up=True, - pad_val=dict(img=114)), - dict( - type='YOLOv5RandomAffine', - max_rotate_degree=0.0, - max_translate_ratio=0.1, - scaling_ratio_range=(1 - affine_scale, 1 + affine_scale), - max_shear_degree=0.0, - ), - dict(type='YOLOv5HSVRandomAug'), - dict(type='mmdet.RandomFlip', prob=0.5), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', - 'flip_direction')) -] - -train_dataloader = dict( - 
batch_size=train_batch_size_per_gpu, - num_workers=train_num_workers, - collate_fn=dict(type='yolov5_collate'), - persistent_workers=persistent_workers, - pin_memory=True, - sampler=dict(type='DefaultSampler', shuffle=True), - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=train_ann_file, - data_prefix=dict(img=train_data_prefix), - filter_cfg=dict(filter_empty_gt=False, min_size=32), - pipeline=train_pipeline)) - -test_pipeline = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - dict(type='mmyolo.YOLOv5KeepRatioResize', scale=img_scale), - dict( - type='mmyolo.LetterResize', - scale=img_scale, - allow_scale_up=False, - pad_val=dict(img=114)), - dict(type='LoadAnnotations', with_bbox=True, _scope_='mmdet'), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor', 'pad_param')) -] - -val_dataloader = dict( - batch_size=val_batch_size_per_gpu, - num_workers=val_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=dict( - type=dataset_type, - data_root=data_root, - test_mode=True, - data_prefix=dict(img=val_data_prefix), - ann_file=val_ann_file, - pipeline=test_pipeline, - batch_shapes_cfg=batch_shapes_cfg)) - -test_dataloader = val_dataloader - -# Optimizer and learning rate scheduler of YOLOv6 are basically the same as YOLOv5. # noqa -# The difference is that the scheduler_type of YOLOv6 is cosine. 
-optim_wrapper = dict( - type='OptimWrapper', - optimizer=dict( - type='SGD', - lr=base_lr, - momentum=0.937, - weight_decay=weight_decay, - nesterov=True, - batch_size_per_gpu=train_batch_size_per_gpu), - constructor='YOLOv5OptimizerConstructor') - -default_hooks = dict( - param_scheduler=dict( - type='YOLOv5ParamSchedulerHook', - scheduler_type='cosine', - lr_factor=lr_factor, - max_epochs=max_epochs), - checkpoint=dict( - type='CheckpointHook', - interval=save_epoch_intervals, - max_keep_ckpts=max_keep_ckpts, - save_best='auto')) - -custom_hooks = [ - dict( - type='EMAHook', - ema_type='ExpMomentumEMA', - momentum=0.0001, - update_buffers=True, - strict_load=False, - priority=49), - dict( - type='mmdet.PipelineSwitchHook', - switch_epoch=max_epochs - num_last_epochs, - switch_pipeline=train_pipeline_stage2) -] - -val_evaluator = dict( - type='mmdet.CocoMetric', - proposal_nums=(100, 1, 10), - ann_file=data_root + val_ann_file, - metric='bbox') -test_evaluator = val_evaluator - -train_cfg = dict( - type='EpochBasedTrainLoop', - max_epochs=max_epochs, - val_interval=save_epoch_intervals, - dynamic_intervals=[(max_epochs - num_last_epochs, 1)]) -val_cfg = dict(type='ValLoop') -test_cfg = dict(type='TestLoop') diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/MessageEvent.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/MessageEvent.ts deleted file mode 100644 index 7ec1a4b4303a2cd69e79a084b162102e04d2d5f7..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/MessageEvent.ts +++ /dev/null @@ -1,6 +0,0 @@ -import type { Timestamps } from "./Timestamps"; -import type { User } from "./User"; - -export interface MessageEvent extends Pick { - userId: User["_id"] | User["sessionId"]; -} diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/order/concurrent.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/order/concurrent.py deleted file mode 100644 
index 738e32e288ba3db1de6d76fd4f0ddcd1367e604c..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/order/concurrent.py +++ /dev/null @@ -1,19 +0,0 @@ -from __future__ import annotations - -from typing import TYPE_CHECKING, List - -from . import order_registry as OrderRegistry -from .base import BaseOrder - -if TYPE_CHECKING: - from agentverse.environments import BaseEnvironment - - -@OrderRegistry.register("concurrent") -class ConcurrentOrder(BaseOrder): - """ - The agents speak concurrently - """ - - def get_next_agent_idx(self, environment: BaseEnvironment) -> List[int]: - return list(range(len(environment.agents))) diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/__init__.py b/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/__init__.py deleted file mode 100644 index 4c34f4b48e9eba0e0262f7a4e603a8a36f0e8c4d..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .base import TasksolvingRule - -""" -from .decision_maker import * -from .evaluator import * -from .executor import * -from .role_assigner import * -""" diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/TouchingMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/TouchingMethods.js deleted file mode 100644 index 05d6c32fb0af674fcfe5185159bdcb66814a7382..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/TouchingMethods.js +++ /dev/null @@ -1,118 +0,0 @@ -import InTouching from '../intouching/InTouching.js'; -import IsPointerInBounds from '../../../plugins/utils/input/IsPointerInBounds.js'; - -export default { - isPointerInBounds(target) { - if (target === undefined) { - target = this; - } else if (typeof 
(target) === 'string') { - target = this.getElement(target); - } - - if (!target) { - return false; - } - - return IsPointerInBounds(target); - }, - - onTouching(gameObject, callback, scope, config) { - if (!gameObject) { - return this; - } - - if (typeof (gameObject) === 'function') { - config = scope; - scope = callback; - callback = gameObject; - gameObject = this; - } - - if (gameObject._inTouching === undefined) { - gameObject._inTouching = new InTouching(gameObject, config); - } - gameObject._inTouching.on('intouch', callback, scope); - - return this; - }, - - offTouching(gameObject, callback, scope) { - if (typeof (gameObject) === 'function') { - scope = callback; - callback = gameObject; - gameObject = this; - } - - if (gameObject._inTouching === undefined) { - return this; - } - gameObject._inTouching.off('intouch', callback, scope); - - return this; - }, - - onTouchingEnd(gameObject, callback, scope, config) { - if (!gameObject) { - return this; - } - - if (typeof (gameObject) === 'function') { - config = scope; - scope = callback; - callback = gameObject; - gameObject = this; - } - - if (gameObject._inTouching === undefined) { - gameObject._inTouching = new InTouching(gameObject, config); - } - gameObject._inTouching.on('touchend', callback, scope); - - return this; - }, - - offTouchingEnd(gameObject, callback, scope) { - if (typeof (gameObject) === 'function') { - scope = callback; - callback = gameObject; - gameObject = this; - } - - if (gameObject._inTouching === undefined) { - return this; - } - gameObject._inTouching.off('touchend', callback, scope); - - return this; - }, - - - enableTouching(gameObject, enabled) { - if (gameObject && typeof (gameObject) !== 'object') { - enabled = gameObject; - gameObject = this; - } - - if (gameObject._inTouching === undefined) { - return this; - } - gameObject._inTouching.setEnable(enabled); - - return this; - }, - - disableTouching(gameObject) { - if (gameObject && typeof (gameObject) !== 'object') { - 
gameObject = this; - } - - if (gameObject._inTouching === undefined) { - return this; - } - gameObject._inTouching.setEnable(false); - - return this; - }, - - -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/Factory.js deleted file mode 100644 index 16cf6cb8b9574da49169fac1a44ac3b11d68dce3..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import OverlapSizer from './OverlapSizer.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('overlapSizer', function (x, y, minWidth, minHeight, config) { - var gameObject = new OverlapSizer(this.scene, x, y, minWidth, minHeight, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.OverlapSizer', OverlapSizer); - -export default OverlapSizer; \ No newline at end of file diff --git a/spaces/Alcom/chaoyi-wu-PMC_LLAMA_7B/README.md b/spaces/Alcom/chaoyi-wu-PMC_LLAMA_7B/README.md deleted file mode 100644 index fbc2a979e2ef5ff4f81e01c9842a825fb26be32d..0000000000000000000000000000000000000000 --- a/spaces/Alcom/chaoyi-wu-PMC_LLAMA_7B/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chaoyi-wu-PMC LLAMA 7B -emoji: 📊 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Alesteba/NeRF_ficus-pxl/rendering.py b/spaces/Alesteba/NeRF_ficus-pxl/rendering.py deleted file mode 100644 index 129cf3b46ba20a956e5f4bcf20dac10dfa0b2d11..0000000000000000000000000000000000000000 --- a/spaces/Alesteba/NeRF_ficus-pxl/rendering.py +++ /dev/null 
@@ -1,161 +0,0 @@ -import streamlit as st -import tensorflow as tf -import numpy as np - -from config import * - -def encode_position(x): - """Encodes the position into its corresponding Fourier feature. - Args: - x: The input coordinate. - Returns: - Fourier features tensors of the position. - """ - positions = [x] - for i in range(POS_ENCODE_DIMS): - for fn in [tf.sin, tf.cos]: - positions.append(fn(2.0 ** i * x)) - return tf.concat(positions, axis=-1) - - -def get_rays(height, width, focal, pose): - """Computes origin point and direction vector of rays. - Args: - height: Height of the image. - width: Width of the image. - focal: The focal length between the images and the camera. - pose: The pose matrix of the camera. - Returns: - Tuple of origin point and direction vector for rays. - """ - # Build a meshgrid for the rays. - i, j = tf.meshgrid( - tf.range(width, dtype=tf.float32), - tf.range(height, dtype=tf.float32), - indexing="xy", - ) - - # Normalize the x axis coordinates. - transformed_i = (i - width * 0.5) / focal - - # Normalize the y axis coordinates. - transformed_j = (j - height * 0.5) / focal - - # Create the direction unit vectors. - directions = tf.stack([transformed_i, -transformed_j, -tf.ones_like(i)], axis=-1) - - # Get the camera matrix. - camera_matrix = pose[:3, :3] - height_width_focal = pose[:3, -1] - - # Get origins and directions for the rays. - transformed_dirs = directions[..., None, :] - camera_dirs = transformed_dirs * camera_matrix - ray_directions = tf.reduce_sum(camera_dirs, axis=-1) - ray_origins = tf.broadcast_to(height_width_focal, tf.shape(ray_directions)) - - # Return the origins and directions. - return (ray_origins, ray_directions) - - -def render_flat_rays(ray_origins, ray_directions, near, far, num_samples, rand=False): - """Renders the rays and flattens it. - Args: - ray_origins: The origin points for rays. - ray_directions: The direction unit vectors for the rays. - near: The near bound of the volumetric scene. 
- far: The far bound of the volumetric scene. - num_samples: Number of sample points in a ray. - rand: Choice for randomising the sampling strategy. - Returns: - Tuple of flattened rays and sample points on each rays. - """ - # Compute 3D query points. - # Equation: r(t) = o+td -> Building the "t" here. - t_vals = tf.linspace(near, far, num_samples) - if rand: - # Inject uniform noise into sample space to make the sampling - # continuous. - shape = list(ray_origins.shape[:-1]) + [num_samples] - noise = tf.random.uniform(shape=shape) * (far - near) / num_samples - t_vals = t_vals + noise - - # Equation: r(t) = o + td -> Building the "r" here. - rays = ray_origins[..., None, :] + ( - ray_directions[..., None, :] * t_vals[..., None] - ) - rays_flat = tf.reshape(rays, [-1, 3]) - rays_flat = encode_position(rays_flat) - return (rays_flat, t_vals) - - -def map_fn(pose): - """Maps individual pose to flattened rays and sample points. - Args: - pose: The pose matrix of the camera. - Returns: - Tuple of flattened rays and sample points corresponding to the - camera pose. - """ - (ray_origins, ray_directions) = get_rays(height=H, width=W, focal=focal, pose=pose) - (rays_flat, t_vals) = render_flat_rays( - ray_origins=ray_origins, - ray_directions=ray_directions, - near=2.0, - far=6.0, - num_samples=NUM_SAMPLES, - rand=True, - ) - return (rays_flat, t_vals) - - -def render_rgb_depth(model, rays_flat, t_vals, rand=True, train=True): - """Generates the RGB image and depth map from model prediction. - Args: - model: The MLP model that is trained to predict the rgb and - volume density of the volumetric scene. - rays_flat: The flattened rays that serve as the input to - the NeRF model. - t_vals: The sample points for the rays. - rand: Choice to randomise the sampling strategy. - train: Whether the model is in the training or testing phase. - Returns: - Tuple of rgb image and depth map. - """ - # Get the predictions from the nerf model and reshape it. 
- if train: - predictions = model(rays_flat) - else: - predictions = model.predict(rays_flat) - predictions = tf.reshape(predictions, shape=(BATCH_SIZE, H, W, NUM_SAMPLES, 4)) - - # Slice the predictions into rgb and sigma. - rgb = tf.sigmoid(predictions[..., :-1]) - sigma_a = tf.nn.relu(predictions[..., -1]) - - # Get the distance of adjacent intervals. - delta = t_vals[..., 1:] - t_vals[..., :-1] - # delta shape = (num_samples) - if rand: - delta = tf.concat( - [delta, tf.broadcast_to([1e10], shape=(BATCH_SIZE, H, W, 1))], axis=-1 - ) - alpha = 1.0 - tf.exp(-sigma_a * delta) - else: - delta = tf.concat( - [delta, tf.broadcast_to([1e10], shape=(BATCH_SIZE, 1))], axis=-1 - ) - alpha = 1.0 - tf.exp(-sigma_a * delta[:, None, None, :]) - - # Get transmittance. - exp_term = 1.0 - alpha - epsilon = 1e-10 - transmittance = tf.math.cumprod(exp_term + epsilon, axis=-1, exclusive=True) - weights = alpha * transmittance - rgb = tf.reduce_sum(weights[..., None] * rgb, axis=-2) - - if rand: - depth_map = tf.reduce_sum(weights * t_vals, axis=-1) - else: - depth_map = tf.reduce_sum(weights * t_vals[:, None, None], axis=-1) - return (rgb, depth_map) diff --git a/spaces/AlexWang/lama/models/ade20k/segm_lib/utils/__init__.py b/spaces/AlexWang/lama/models/ade20k/segm_lib/utils/__init__.py deleted file mode 100644 index abe3cbe49477fe37d4fc16249de8a10f4fb4a013..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/models/ade20k/segm_lib/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .th import * diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/ipc.cpp b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/ipc.cpp deleted file mode 100644 index c713b852ea5a51fbeb4729b64561da482caaf351..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/ipc.cpp +++ /dev/null @@ -1,701 +0,0 @@ - -#include -#include -#include -#include // std::pair, std::move, 
std::forward -#include -#include // aligned_storage_t -#include -#include -#include -#include - -#include "libipc/ipc.h" -#include "libipc/def.h" -#include "libipc/shm.h" -#include "libipc/pool_alloc.h" -#include "libipc/queue.h" -#include "libipc/policy.h" -#include "libipc/rw_lock.h" -#include "libipc/waiter.h" - -#include "libipc/utility/log.h" -#include "libipc/utility/id_pool.h" -#include "libipc/utility/scope_guard.h" -#include "libipc/utility/utility.h" - -#include "libipc/memory/resource.h" -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_array.h" - -namespace { - -using msg_id_t = std::uint32_t; -using acc_t = std::atomic; - -template -struct msg_t; - -template -struct msg_t<0, AlignSize> { - msg_id_t cc_id_; - msg_id_t id_; - std::int32_t remain_; - bool storage_; -}; - -template -struct msg_t : msg_t<0, AlignSize> { - std::aligned_storage_t data_ {}; - - msg_t() = default; - msg_t(msg_id_t cc_id, msg_id_t id, std::int32_t remain, void const * data, std::size_t size) - : msg_t<0, AlignSize> {cc_id, id, remain, (data == nullptr) || (size == 0)} { - if (this->storage_) { - if (data != nullptr) { - // copy storage-id - *reinterpret_cast(&data_) = - *static_cast(data); - } - } - else std::memcpy(&data_, data, size); - } -}; - -template -ipc::buff_t make_cache(T& data, std::size_t size) { - auto ptr = ipc::mem::alloc(size); - std::memcpy(ptr, &data, (ipc::detail::min)(sizeof(data), size)); - return { ptr, size, ipc::mem::free }; -} - -struct cache_t { - std::size_t fill_; - ipc::buff_t buff_; - - cache_t(std::size_t f, ipc::buff_t && b) - : fill_(f), buff_(std::move(b)) - {} - - void append(void const * data, std::size_t size) { - if (fill_ >= buff_.size() || data == nullptr || size == 0) return; - auto new_fill = (ipc::detail::min)(fill_ + size, buff_.size()); - std::memcpy(static_cast(buff_.data()) + fill_, data, new_fill - fill_); - fill_ = new_fill; - } -}; - -auto cc_acc() { - static ipc::shm::handle acc_h("__CA_CONN__", sizeof(acc_t)); - 
return static_cast(acc_h.get()); -} - -IPC_CONSTEXPR_ std::size_t align_chunk_size(std::size_t size) noexcept { - return (((size - 1) / ipc::large_msg_align) + 1) * ipc::large_msg_align; -} - -IPC_CONSTEXPR_ std::size_t calc_chunk_size(std::size_t size) noexcept { - return ipc::make_align(alignof(std::max_align_t), align_chunk_size( - ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic)) + size)); -} - -struct chunk_t { - std::atomic &conns() noexcept { - return *reinterpret_cast *>(this); - } - - void *data() noexcept { - return reinterpret_cast(this) - + ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic)); - } -}; - -struct chunk_info_t { - ipc::id_pool<> pool_; - ipc::spin_lock lock_; - - IPC_CONSTEXPR_ static std::size_t chunks_mem_size(std::size_t chunk_size) noexcept { - return ipc::id_pool<>::max_count * chunk_size; - } - - ipc::byte_t *chunks_mem() noexcept { - return reinterpret_cast(this + 1); - } - - chunk_t *at(std::size_t chunk_size, ipc::storage_id_t id) noexcept { - if (id < 0) return nullptr; - return reinterpret_cast(chunks_mem() + (chunk_size * id)); - } -}; - -auto& chunk_storages() { - class chunk_handle_t { - ipc::shm::handle handle_; - - public: - chunk_info_t *get_info(std::size_t chunk_size) { - if (!handle_.valid() && - !handle_.acquire( ("__CHUNK_INFO__" + ipc::to_string(chunk_size)).c_str(), - sizeof(chunk_info_t) + chunk_info_t::chunks_mem_size(chunk_size) )) { - ipc::error("[chunk_storages] chunk_shm.id_info_.acquire failed: chunk_size = %zd\n", chunk_size); - return nullptr; - } - auto info = static_cast(handle_.get()); - if (info == nullptr) { - ipc::error("[chunk_storages] chunk_shm.id_info_.get failed: chunk_size = %zd\n", chunk_size); - return nullptr; - } - return info; - } - }; - static ipc::map chunk_hs; - return chunk_hs; -} - -chunk_info_t *chunk_storage_info(std::size_t chunk_size) { - auto &storages = chunk_storages(); - std::decay_t::iterator it; - { - static ipc::rw_lock lock; - IPC_UNUSED_ 
std::shared_lock guard {lock}; - if ((it = storages.find(chunk_size)) == storages.end()) { - using chunk_handle_t = std::decay_t::value_type::second_type; - guard.unlock(); - IPC_UNUSED_ std::lock_guard guard {lock}; - it = storages.emplace(chunk_size, chunk_handle_t{}).first; - } - } - return it->second.get_info(chunk_size); -} - -std::pair acquire_storage(std::size_t size, ipc::circ::cc_t conns) { - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return {}; - - info->lock_.lock(); - info->pool_.prepare(); - // got an unique id - auto id = info->pool_.acquire(); - info->lock_.unlock(); - - auto chunk = info->at(chunk_size, id); - if (chunk == nullptr) return {}; - chunk->conns().store(conns, std::memory_order_relaxed); - return { id, chunk->data() }; -} - -void *find_storage(ipc::storage_id_t id, std::size_t size) { - if (id < 0) { - ipc::error("[find_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return nullptr; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return nullptr; - return info->at(chunk_size, id)->data(); -} - -void release_storage(ipc::storage_id_t id, std::size_t size) { - if (id < 0) { - ipc::error("[release_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return; - info->lock_.lock(); - info->pool_.release(id); - info->lock_.unlock(); -} - -template -bool sub_rc(ipc::wr, - std::atomic &/*conns*/, ipc::circ::cc_t /*curr_conns*/, ipc::circ::cc_t /*conn_id*/) noexcept { - return true; -} - -template -bool sub_rc(ipc::wr, - std::atomic &conns, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) noexcept { - auto last_conns = curr_conns & ~conn_id; - for (unsigned k = 0;;) { - auto chunk_conns = conns.load(std::memory_order_acquire); - if 
(conns.compare_exchange_weak(chunk_conns, chunk_conns & last_conns, std::memory_order_release)) { - return (chunk_conns & last_conns) == 0; - } - ipc::yield(k); - } -} - -template -void recycle_storage(ipc::storage_id_t id, std::size_t size, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) { - if (id < 0) { - ipc::error("[recycle_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return; - - auto chunk = info->at(chunk_size, id); - if (chunk == nullptr) return; - - if (!sub_rc(Flag{}, chunk->conns(), curr_conns, conn_id)) { - return; - } - info->lock_.lock(); - info->pool_.release(id); - info->lock_.unlock(); -} - -template -bool clear_message(void* p) { - auto msg = static_cast(p); - if (msg->storage_) { - std::int32_t r_size = static_cast(ipc::data_length) + msg->remain_; - if (r_size <= 0) { - ipc::error("[clear_message] invalid msg size: %d\n", (int)r_size); - return true; - } - release_storage( - *reinterpret_cast(&msg->data_), - static_cast(r_size)); - } - return true; -} - -struct conn_info_head { - - ipc::string name_; - msg_id_t cc_id_; // connection-info id - ipc::detail::waiter cc_waiter_, wt_waiter_, rd_waiter_; - ipc::shm::handle acc_h_; - - conn_info_head(char const * name) - : name_ {name} - , cc_id_ {(cc_acc() == nullptr) ? 
0 : cc_acc()->fetch_add(1, std::memory_order_relaxed)} - , cc_waiter_{("__CC_CONN__" + name_).c_str()} - , wt_waiter_{("__WT_CONN__" + name_).c_str()} - , rd_waiter_{("__RD_CONN__" + name_).c_str()} - , acc_h_ {("__AC_CONN__" + name_).c_str(), sizeof(acc_t)} { - } - - void quit_waiting() { - cc_waiter_.quit_waiting(); - wt_waiter_.quit_waiting(); - rd_waiter_.quit_waiting(); - } - - auto acc() { - return static_cast(acc_h_.get()); - } - - auto& recv_cache() { - thread_local ipc::unordered_map tls; - return tls; - } -}; - -template -bool wait_for(W& waiter, F&& pred, std::uint64_t tm) { - if (tm == 0) return !pred(); - for (unsigned k = 0; pred();) { - bool ret = true; - ipc::sleep(k, [&k, &ret, &waiter, &pred, tm] { - ret = waiter.wait_if(std::forward(pred), tm); - k = 0; - }); - if (!ret) return false; // timeout or fail - if (k == 0) break; // k has been reset - } - return true; -} - -template -struct queue_generator { - - using queue_t = ipc::queue, Policy>; - - struct conn_info_t : conn_info_head { - queue_t que_; - - conn_info_t(char const * name) - : conn_info_head{name} - , que_{("__QU_CONN__" + - ipc::to_string(DataSize) + "__" + - ipc::to_string(AlignSize) + "__" + name).c_str()} { - } - - void disconnect_receiver() { - bool dis = que_.disconnect(); - this->quit_waiting(); - if (dis) { - this->recv_cache().clear(); - } - } - }; -}; - -template -struct detail_impl { - -using policy_t = Policy; -using flag_t = typename policy_t::flag_t; -using queue_t = typename queue_generator::queue_t; -using conn_info_t = typename queue_generator::conn_info_t; - -constexpr static conn_info_t* info_of(ipc::handle_t h) noexcept { - return static_cast(h); -} - -constexpr static queue_t* queue_of(ipc::handle_t h) noexcept { - return (info_of(h) == nullptr) ? 
nullptr : &(info_of(h)->que_); -} - -/* API implementations */ - -static void disconnect(ipc::handle_t h) { - auto que = queue_of(h); - if (que == nullptr) { - return; - } - que->shut_sending(); - assert(info_of(h) != nullptr); - info_of(h)->disconnect_receiver(); -} - -static bool reconnect(ipc::handle_t * ph, bool start_to_recv) { - assert(ph != nullptr); - assert(*ph != nullptr); - auto que = queue_of(*ph); - if (que == nullptr) { - return false; - } - if (start_to_recv) { - que->shut_sending(); - if (que->connect()) { // wouldn't connect twice - info_of(*ph)->cc_waiter_.broadcast(); - return true; - } - return false; - } - // start_to_recv == false - if (que->connected()) { - info_of(*ph)->disconnect_receiver(); - } - return que->ready_sending(); -} - -static bool connect(ipc::handle_t * ph, char const * name, bool start_to_recv) { - assert(ph != nullptr); - if (*ph == nullptr) { - *ph = ipc::mem::alloc(name); - } - return reconnect(ph, start_to_recv); -} - -static void destroy(ipc::handle_t h) { - disconnect(h); - ipc::mem::free(info_of(h)); -} - -static std::size_t recv_count(ipc::handle_t h) noexcept { - auto que = queue_of(h); - if (que == nullptr) { - return ipc::invalid_value; - } - return que->conn_count(); -} - -static bool wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) { - auto que = queue_of(h); - if (que == nullptr) { - return false; - } - return wait_for(info_of(h)->cc_waiter_, [que, r_count] { - return que->conn_count() < r_count; - }, tm); -} - -template -static bool send(F&& gen_push, ipc::handle_t h, void const * data, std::size_t size) { - if (data == nullptr || size == 0) { - ipc::error("fail: send(%p, %zd)\n", data, size); - return false; - } - auto que = queue_of(h); - if (que == nullptr) { - ipc::error("fail: send, queue_of(h) == nullptr\n"); - return false; - } - if (que->elems() == nullptr) { - ipc::error("fail: send, queue_of(h)->elems() == nullptr\n"); - return false; - } - if (!que->ready_sending()) { - 
ipc::error("fail: send, que->ready_sending() == false\n"); - return false; - } - ipc::circ::cc_t conns = que->elems()->connections(std::memory_order_relaxed); - if (conns == 0) { - ipc::error("fail: send, there is no receiver on this connection.\n"); - return false; - } - // calc a new message id - auto acc = info_of(h)->acc(); - if (acc == nullptr) { - ipc::error("fail: send, info_of(h)->acc() == nullptr\n"); - return false; - } - auto msg_id = acc->fetch_add(1, std::memory_order_relaxed); - auto try_push = std::forward(gen_push)(info_of(h), que, msg_id); - if (size > ipc::large_msg_limit) { - auto dat = acquire_storage(size, conns); - void * buf = dat.second; - if (buf != nullptr) { - std::memcpy(buf, data, size); - return try_push(static_cast(size) - - static_cast(ipc::data_length), &(dat.first), 0); - } - // try using message fragment - //ipc::log("fail: shm::handle for big message. msg_id: %zd, size: %zd\n", msg_id, size); - } - // push message fragment - std::int32_t offset = 0; - for (std::int32_t i = 0; i < static_cast(size / ipc::data_length); ++i, offset += ipc::data_length) { - if (!try_push(static_cast(size) - offset - static_cast(ipc::data_length), - static_cast(data) + offset, ipc::data_length)) { - return false; - } - } - // if remain > 0, this is the last message fragment - std::int32_t remain = static_cast(size) - offset; - if (remain > 0) { - if (!try_push(remain - static_cast(ipc::data_length), - static_cast(data) + offset, - static_cast(remain))) { - return false; - } - } - return true; -} - -static bool send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return send([tm](auto info, auto que, auto msg_id) { - return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) { - if (!wait_for(info->wt_waiter_, [&] { - return !que->push( - [](void*) { return true; }, - info->cc_id_, msg_id, remain, data, size); - }, tm)) { - ipc::log("force_push: msg_id = %zd, remain = %d, size = %zd\n", msg_id, 
remain, size); - if (!que->force_push( - clear_message, - info->cc_id_, msg_id, remain, data, size)) { - return false; - } - } - info->rd_waiter_.broadcast(); - return true; - }; - }, h, data, size); -} - -static bool try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return send([tm](auto info, auto que, auto msg_id) { - return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) { - if (!wait_for(info->wt_waiter_, [&] { - return !que->push( - [](void*) { return true; }, - info->cc_id_, msg_id, remain, data, size); - }, tm)) { - return false; - } - info->rd_waiter_.broadcast(); - return true; - }; - }, h, data, size); -} - -static ipc::buff_t recv(ipc::handle_t h, std::uint64_t tm) { - auto que = queue_of(h); - if (que == nullptr) { - ipc::error("fail: recv, queue_of(h) == nullptr\n"); - return {}; - } - if (!que->connected()) { - // hasn't connected yet, just return. - return {}; - } - auto& rc = info_of(h)->recv_cache(); - for (;;) { - // pop a new message - typename queue_t::value_t msg; - if (!wait_for(info_of(h)->rd_waiter_, [que, &msg] { - return !que->pop(msg); - }, tm)) { - // pop failed, just return. 
- return {}; - } - info_of(h)->wt_waiter_.broadcast(); - if ((info_of(h)->acc() != nullptr) && (msg.cc_id_ == info_of(h)->cc_id_)) { - continue; // ignore message to self - } - // msg.remain_ may minus & abs(msg.remain_) < data_length - std::int32_t r_size = static_cast(ipc::data_length) + msg.remain_; - if (r_size <= 0) { - ipc::error("fail: recv, r_size = %d\n", (int)r_size); - return {}; - } - std::size_t msg_size = static_cast(r_size); - // large message - if (msg.storage_) { - ipc::storage_id_t buf_id = *reinterpret_cast(&msg.data_); - void* buf = find_storage(buf_id, msg_size); - if (buf != nullptr) { - struct recycle_t { - ipc::storage_id_t storage_id; - ipc::circ::cc_t curr_conns; - ipc::circ::cc_t conn_id; - } *r_info = ipc::mem::alloc(recycle_t{ - buf_id, que->elems()->connections(std::memory_order_relaxed), que->connected_id() - }); - if (r_info == nullptr) { - ipc::log("fail: ipc::mem::alloc.\n"); - return ipc::buff_t{buf, msg_size}; // no recycle - } else { - return ipc::buff_t{buf, msg_size, [](void* p_info, std::size_t size) { - auto r_info = static_cast(p_info); - IPC_UNUSED_ auto finally = ipc::guard([r_info] { - ipc::mem::free(r_info); - }); - recycle_storage(r_info->storage_id, size, r_info->curr_conns, r_info->conn_id); - }, r_info}; - } - } else { - ipc::log("fail: shm::handle for large message. 
msg_id: %zd, buf_id: %zd, size: %zd\n", msg.id_, buf_id, msg_size); - continue; - } - } - // find cache with msg.id_ - auto cac_it = rc.find(msg.id_); - if (cac_it == rc.end()) { - if (msg_size <= ipc::data_length) { - return make_cache(msg.data_, msg_size); - } - // gc - if (rc.size() > 1024) { - std::vector need_del; - for (auto const & pair : rc) { - auto cmp = std::minmax(msg.id_, pair.first); - if (cmp.second - cmp.first > 8192) { - need_del.push_back(pair.first); - } - } - for (auto id : need_del) rc.erase(id); - } - // cache the first message fragment - rc.emplace(msg.id_, cache_t { ipc::data_length, make_cache(msg.data_, msg_size) }); - } - // has cached before this message - else { - auto& cac = cac_it->second; - // this is the last message fragment - if (msg.remain_ <= 0) { - cac.append(&(msg.data_), msg_size); - // finish this message, erase it from cache - auto buff = std::move(cac.buff_); - rc.erase(cac_it); - return buff; - } - // there are remain datas after this message - cac.append(&(msg.data_), ipc::data_length); - } - } -} - -static ipc::buff_t try_recv(ipc::handle_t h) { - return recv(h, 0); -} - -}; // detail_impl - -template -using policy_t = ipc::policy::choose; - -} // internal-linkage - -namespace ipc { - -template -ipc::handle_t chan_impl::inited() { - ipc::detail::waiter::init(); - return nullptr; -} - -template -bool chan_impl::connect(ipc::handle_t * ph, char const * name, unsigned mode) { - return detail_impl>::connect(ph, name, mode & receiver); -} - -template -bool chan_impl::reconnect(ipc::handle_t * ph, unsigned mode) { - return detail_impl>::reconnect(ph, mode & receiver); -} - -template -void chan_impl::disconnect(ipc::handle_t h) { - detail_impl>::disconnect(h); -} - -template -void chan_impl::destroy(ipc::handle_t h) { - detail_impl>::destroy(h); -} - -template -char const * chan_impl::name(ipc::handle_t h) { - auto info = detail_impl>::info_of(h); - return (info == nullptr) ? 
nullptr : info->name_.c_str(); -} - -template -std::size_t chan_impl::recv_count(ipc::handle_t h) { - return detail_impl>::recv_count(h); -} - -template -bool chan_impl::wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) { - return detail_impl>::wait_for_recv(h, r_count, tm); -} - -template -bool chan_impl::send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return detail_impl>::send(h, data, size, tm); -} - -template -buff_t chan_impl::recv(ipc::handle_t h, std::uint64_t tm) { - return detail_impl>::recv(h, tm); -} - -template -bool chan_impl::try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return detail_impl>::try_send(h, data, size, tm); -} - -template -buff_t chan_impl::try_recv(ipc::handle_t h) { - return detail_impl>::try_recv(h); -} - -template struct chan_impl>; -// template struct chan_impl>; // TBD -// template struct chan_impl>; // TBD -template struct chan_impl>; -template struct chan_impl>; - -} // namespace ipc diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/longcode/prod_cons.h b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/longcode/prod_cons.h deleted file mode 100644 index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/longcode/prod_cons.h +++ /dev/null @@ -1,433 +0,0 @@ -#pragma once - -#include -#include -#include -#include -#include - -#include "libipc/def.h" - -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" -#include "libipc/utility/log.h" -#include "libipc/utility/utility.h" - -namespace ipc { - -//////////////////////////////////////////////////////////////// -/// producer-consumer implementation -//////////////////////////////////////////////////////////////// - -template -struct prod_cons_impl; - -template <> -struct prod_cons_impl> { - - template - struct elem_t { - 
std::aligned_storage_t data_ {}; - }; - - alignas(cache_line_size) std::atomic rd_; // read index - alignas(cache_line_size) std::atomic wt_; // write index - - constexpr circ::u2_t cursor() const noexcept { - return 0; - } - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed)); - if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) { - return false; // full - } - std::forward(f)(&(elems[cur_wt].data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - /** - * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'. - * So we could just disconnect all connections of receiver, and return false. - */ - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(~static_cast(0u)); - return false; - } - - template - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed)); - if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::forward(f)(&(elems[cur_rd].data_)); - std::forward(out)(true); - rd_.fetch_add(1, std::memory_order_release); - return true; - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - if (circ::index_of(cur_rd) == - circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - 
std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - using flag_t = std::uint64_t; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - circ::u2_t cur_ct, nxt_ct; - for (unsigned k = 0;;) { - cur_ct = ct_.load(std::memory_order_relaxed); - if (circ::index_of(nxt_ct = cur_ct + 1) == - circ::index_of(rd_.load(std::memory_order_acquire))) { - return false; // full - } - if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - auto* el = elems + circ::index_of(cur_ct); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - while (1) { - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if (cur_ct != wt_.load(std::memory_order_relaxed)) { - return true; - } - if ((~cac_ct) != cur_ct) { - return true; - } - if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) { - return true; - } - wt_.store(nxt_ct, std::memory_order_release); - cur_ct = nxt_ct; - nxt_ct = cur_ct + 1; - el = elems + circ::index_of(cur_ct); - } - return true; - } - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - auto cur_wt = wt_.load(std::memory_order_acquire); - auto id_rd = circ::index_of(cur_rd); - auto id_wt = circ::index_of(cur_wt); - if (id_rd == id_wt) { - auto* el = elems + id_wt; - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if ((~cac_ct) != cur_wt) { - return 
false; // empty - } - if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) { - wt_.store(cur_wt + 1, std::memory_order_release); - } - k = 0; - } - else { - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - - enum : rc_t { - ep_mask = 0x00000000ffffffffull, - ep_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - }; - - alignas(cache_line_size) std::atomic wt_; // write index - alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer - - circ::u2_t cursor() const noexcept { - return wt_.load(std::memory_order_acquire); - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) { - return false; // has not finished yet - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - epoch_ += ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + 
circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) { - if (cur == cursor()) return false; // acquire - auto* el = elems + circ::index_of(cur++); - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & ep_mask) == 0) { - std::forward(out)(true); - return true; - } - auto nxt_rc = cur_rc & ~static_cast(wrapper->connected_id()); - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)((nxt_rc & ep_mask) == 0); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - using flag_t = std::uint64_t; - - enum : rc_t { - rc_mask = 0x00000000ffffffffull, - ep_mask = 0x00ffffffffffffffull, - ep_incr = 0x0100000000000000ull, - ic_mask = 0xff000000ffffffffull, - ic_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - alignas(cache_line_size) std::atomic epoch_ { 0 }; - - circ::u2_t cursor() const noexcept { - return ct_.load(std::memory_order_acquire); - } - - constexpr static rc_t 
inc_rc(rc_t rc) noexcept { - return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask); - } - - constexpr static rc_t inc_mask(rc_t rc) noexcept { - return inc_rc(rc) & ~rc_mask; - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.load(std::memory_order_acquire); - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_relaxed); - circ::cc_t rem_cc = cur_rc & rc_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) { - return false; // has not finished yet - } - else if (!rem_cc) { - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if ((cur_fl != cur_ct) && cur_fl) { - return false; // full - } - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed) && - epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - 
circ::cc_t rem_cc = cur_rc & rc_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed)) { - if (epoch == epoch_.load(std::memory_order_acquire)) { - break; - } - else if (push(wrapper, std::forward(f), elems)) { - return true; - } - epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) { - auto* el = elems + circ::index_of(cur); - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if (cur_fl != ~static_cast(cur)) { - return false; // empty - } - ++cur; - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & rc_mask) == 0) { - std::forward(out)(true); - el->f_ct_.store(cur + N - 1, std::memory_order_release); - return true; - } - auto nxt_rc = inc_rc(cur_rc) & ~static_cast(wrapper->connected_id()); - bool last_one = false; - if ((last_one = (nxt_rc & rc_mask) == 0)) { - el->f_ct_.store(cur + N - 1, std::memory_order_release); - } - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)(last_one); - return true; - } - ipc::yield(k); - } - } -}; - -} // namespace ipc diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_k_upscaler_to_diffusers.py 
b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_k_upscaler_to_diffusers.py deleted file mode 100644 index 62abedd737855ca0b0bc9abb75c9b6fb91d5bde2..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_k_upscaler_to_diffusers.py +++ /dev/null @@ -1,297 +0,0 @@ -import argparse - -import huggingface_hub -import k_diffusion as K -import torch - -from diffusers import UNet2DConditionModel - - -UPSCALER_REPO = "pcuenq/k-upscaler" - - -def resnet_to_diffusers_checkpoint(resnet, checkpoint, *, diffusers_resnet_prefix, resnet_prefix): - rv = { - # norm1 - f"{diffusers_resnet_prefix}.norm1.linear.weight": checkpoint[f"{resnet_prefix}.main.0.mapper.weight"], - f"{diffusers_resnet_prefix}.norm1.linear.bias": checkpoint[f"{resnet_prefix}.main.0.mapper.bias"], - # conv1 - f"{diffusers_resnet_prefix}.conv1.weight": checkpoint[f"{resnet_prefix}.main.2.weight"], - f"{diffusers_resnet_prefix}.conv1.bias": checkpoint[f"{resnet_prefix}.main.2.bias"], - # norm2 - f"{diffusers_resnet_prefix}.norm2.linear.weight": checkpoint[f"{resnet_prefix}.main.4.mapper.weight"], - f"{diffusers_resnet_prefix}.norm2.linear.bias": checkpoint[f"{resnet_prefix}.main.4.mapper.bias"], - # conv2 - f"{diffusers_resnet_prefix}.conv2.weight": checkpoint[f"{resnet_prefix}.main.6.weight"], - f"{diffusers_resnet_prefix}.conv2.bias": checkpoint[f"{resnet_prefix}.main.6.bias"], - } - - if resnet.conv_shortcut is not None: - rv.update( - { - f"{diffusers_resnet_prefix}.conv_shortcut.weight": checkpoint[f"{resnet_prefix}.skip.weight"], - } - ) - - return rv - - -def self_attn_to_diffusers_checkpoint(checkpoint, *, diffusers_attention_prefix, attention_prefix): - weight_q, weight_k, weight_v = checkpoint[f"{attention_prefix}.qkv_proj.weight"].chunk(3, dim=0) - bias_q, bias_k, bias_v = checkpoint[f"{attention_prefix}.qkv_proj.bias"].chunk(3, dim=0) - rv = { - # norm - f"{diffusers_attention_prefix}.norm1.linear.weight": 
checkpoint[f"{attention_prefix}.norm_in.mapper.weight"], - f"{diffusers_attention_prefix}.norm1.linear.bias": checkpoint[f"{attention_prefix}.norm_in.mapper.bias"], - # to_q - f"{diffusers_attention_prefix}.attn1.to_q.weight": weight_q.squeeze(-1).squeeze(-1), - f"{diffusers_attention_prefix}.attn1.to_q.bias": bias_q, - # to_k - f"{diffusers_attention_prefix}.attn1.to_k.weight": weight_k.squeeze(-1).squeeze(-1), - f"{diffusers_attention_prefix}.attn1.to_k.bias": bias_k, - # to_v - f"{diffusers_attention_prefix}.attn1.to_v.weight": weight_v.squeeze(-1).squeeze(-1), - f"{diffusers_attention_prefix}.attn1.to_v.bias": bias_v, - # to_out - f"{diffusers_attention_prefix}.attn1.to_out.0.weight": checkpoint[f"{attention_prefix}.out_proj.weight"] - .squeeze(-1) - .squeeze(-1), - f"{diffusers_attention_prefix}.attn1.to_out.0.bias": checkpoint[f"{attention_prefix}.out_proj.bias"], - } - - return rv - - -def cross_attn_to_diffusers_checkpoint( - checkpoint, *, diffusers_attention_prefix, diffusers_attention_index, attention_prefix -): - weight_k, weight_v = checkpoint[f"{attention_prefix}.kv_proj.weight"].chunk(2, dim=0) - bias_k, bias_v = checkpoint[f"{attention_prefix}.kv_proj.bias"].chunk(2, dim=0) - - rv = { - # norm2 (ada groupnorm) - f"{diffusers_attention_prefix}.norm{diffusers_attention_index}.linear.weight": checkpoint[ - f"{attention_prefix}.norm_dec.mapper.weight" - ], - f"{diffusers_attention_prefix}.norm{diffusers_attention_index}.linear.bias": checkpoint[ - f"{attention_prefix}.norm_dec.mapper.bias" - ], - # layernorm on encoder_hidden_state - f"{diffusers_attention_prefix}.attn{diffusers_attention_index}.norm_cross.weight": checkpoint[ - f"{attention_prefix}.norm_enc.weight" - ], - f"{diffusers_attention_prefix}.attn{diffusers_attention_index}.norm_cross.bias": checkpoint[ - f"{attention_prefix}.norm_enc.bias" - ], - # to_q - f"{diffusers_attention_prefix}.attn{diffusers_attention_index}.to_q.weight": checkpoint[ - f"{attention_prefix}.q_proj.weight" - ] - 
.squeeze(-1) - .squeeze(-1), - f"{diffusers_attention_prefix}.attn{diffusers_attention_index}.to_q.bias": checkpoint[ - f"{attention_prefix}.q_proj.bias" - ], - # to_k - f"{diffusers_attention_prefix}.attn{diffusers_attention_index}.to_k.weight": weight_k.squeeze(-1).squeeze(-1), - f"{diffusers_attention_prefix}.attn{diffusers_attention_index}.to_k.bias": bias_k, - # to_v - f"{diffusers_attention_prefix}.attn{diffusers_attention_index}.to_v.weight": weight_v.squeeze(-1).squeeze(-1), - f"{diffusers_attention_prefix}.attn{diffusers_attention_index}.to_v.bias": bias_v, - # to_out - f"{diffusers_attention_prefix}.attn{diffusers_attention_index}.to_out.0.weight": checkpoint[ - f"{attention_prefix}.out_proj.weight" - ] - .squeeze(-1) - .squeeze(-1), - f"{diffusers_attention_prefix}.attn{diffusers_attention_index}.to_out.0.bias": checkpoint[ - f"{attention_prefix}.out_proj.bias" - ], - } - - return rv - - -def block_to_diffusers_checkpoint(block, checkpoint, block_idx, block_type): - block_prefix = "inner_model.u_net.u_blocks" if block_type == "up" else "inner_model.u_net.d_blocks" - block_prefix = f"{block_prefix}.{block_idx}" - - diffusers_checkpoint = {} - - if not hasattr(block, "attentions"): - n = 1 # resnet only - elif not block.attentions[0].add_self_attention: - n = 2 # resnet -> cross-attention - else: - n = 3 # resnet -> self-attention -> cross-attention) - - for resnet_idx, resnet in enumerate(block.resnets): - # diffusers_resnet_prefix = f"{diffusers_up_block_prefix}.resnets.{resnet_idx}" - diffusers_resnet_prefix = f"{block_type}_blocks.{block_idx}.resnets.{resnet_idx}" - idx = n * resnet_idx if block_type == "up" else n * resnet_idx + 1 - resnet_prefix = f"{block_prefix}.{idx}" if block_type == "up" else f"{block_prefix}.{idx}" - - diffusers_checkpoint.update( - resnet_to_diffusers_checkpoint( - resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix - ) - ) - - if hasattr(block, "attentions"): - for attention_idx, 
attention in enumerate(block.attentions): - diffusers_attention_prefix = f"{block_type}_blocks.{block_idx}.attentions.{attention_idx}" - idx = n * attention_idx + 1 if block_type == "up" else n * attention_idx + 2 - self_attention_prefix = f"{block_prefix}.{idx}" - cross_attention_prefix = f"{block_prefix}.{idx }" - cross_attention_index = 1 if not attention.add_self_attention else 2 - idx = ( - n * attention_idx + cross_attention_index - if block_type == "up" - else n * attention_idx + cross_attention_index + 1 - ) - cross_attention_prefix = f"{block_prefix}.{idx }" - - diffusers_checkpoint.update( - cross_attn_to_diffusers_checkpoint( - checkpoint, - diffusers_attention_prefix=diffusers_attention_prefix, - diffusers_attention_index=2, - attention_prefix=cross_attention_prefix, - ) - ) - - if attention.add_self_attention is True: - diffusers_checkpoint.update( - self_attn_to_diffusers_checkpoint( - checkpoint, - diffusers_attention_prefix=diffusers_attention_prefix, - attention_prefix=self_attention_prefix, - ) - ) - - return diffusers_checkpoint - - -def unet_to_diffusers_checkpoint(model, checkpoint): - diffusers_checkpoint = {} - - # pre-processing - diffusers_checkpoint.update( - { - "conv_in.weight": checkpoint["inner_model.proj_in.weight"], - "conv_in.bias": checkpoint["inner_model.proj_in.bias"], - } - ) - - # timestep and class embedding - diffusers_checkpoint.update( - { - "time_proj.weight": checkpoint["inner_model.timestep_embed.weight"].squeeze(-1), - "time_embedding.linear_1.weight": checkpoint["inner_model.mapping.0.weight"], - "time_embedding.linear_1.bias": checkpoint["inner_model.mapping.0.bias"], - "time_embedding.linear_2.weight": checkpoint["inner_model.mapping.2.weight"], - "time_embedding.linear_2.bias": checkpoint["inner_model.mapping.2.bias"], - "time_embedding.cond_proj.weight": checkpoint["inner_model.mapping_cond.weight"], - } - ) - - # down_blocks - for down_block_idx, down_block in enumerate(model.down_blocks): - 
diffusers_checkpoint.update(block_to_diffusers_checkpoint(down_block, checkpoint, down_block_idx, "down")) - - # up_blocks - for up_block_idx, up_block in enumerate(model.up_blocks): - diffusers_checkpoint.update(block_to_diffusers_checkpoint(up_block, checkpoint, up_block_idx, "up")) - - # post-processing - diffusers_checkpoint.update( - { - "conv_out.weight": checkpoint["inner_model.proj_out.weight"], - "conv_out.bias": checkpoint["inner_model.proj_out.bias"], - } - ) - - return diffusers_checkpoint - - -def unet_model_from_original_config(original_config): - in_channels = original_config["input_channels"] + original_config["unet_cond_dim"] - out_channels = original_config["input_channels"] + (1 if original_config["has_variance"] else 0) - - block_out_channels = original_config["channels"] - - assert ( - len(set(original_config["depths"])) == 1 - ), "UNet2DConditionModel currently do not support blocks with different number of layers" - layers_per_block = original_config["depths"][0] - - class_labels_dim = original_config["mapping_cond_dim"] - cross_attention_dim = original_config["cross_cond_dim"] - - attn1_types = [] - attn2_types = [] - for s, c in zip(original_config["self_attn_depths"], original_config["cross_attn_depths"]): - if s: - a1 = "self" - a2 = "cross" if c else None - elif c: - a1 = "cross" - a2 = None - else: - a1 = None - a2 = None - attn1_types.append(a1) - attn2_types.append(a2) - - unet = UNet2DConditionModel( - in_channels=in_channels, - out_channels=out_channels, - down_block_types=("KDownBlock2D", "KCrossAttnDownBlock2D", "KCrossAttnDownBlock2D", "KCrossAttnDownBlock2D"), - mid_block_type=None, - up_block_types=("KCrossAttnUpBlock2D", "KCrossAttnUpBlock2D", "KCrossAttnUpBlock2D", "KUpBlock2D"), - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - act_fn="gelu", - norm_num_groups=None, - cross_attention_dim=cross_attention_dim, - attention_head_dim=64, - time_cond_proj_dim=class_labels_dim, - 
resnet_time_scale_shift="scale_shift", - time_embedding_type="fourier", - timestep_post_act="gelu", - conv_in_kernel=1, - conv_out_kernel=1, - ) - - return unet - - -def main(args): - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - orig_config_path = huggingface_hub.hf_hub_download(UPSCALER_REPO, "config_laion_text_cond_latent_upscaler_2.json") - orig_weights_path = huggingface_hub.hf_hub_download( - UPSCALER_REPO, "laion_text_cond_latent_upscaler_2_1_00470000_slim.pth" - ) - print(f"loading original model configuration from {orig_config_path}") - print(f"loading original model checkpoint from {orig_weights_path}") - - print("converting to diffusers unet") - orig_config = K.config.load_config(open(orig_config_path))["model"] - model = unet_model_from_original_config(orig_config) - - orig_checkpoint = torch.load(orig_weights_path, map_location=device)["model_ema"] - converted_checkpoint = unet_to_diffusers_checkpoint(model, orig_checkpoint) - - model.load_state_dict(converted_checkpoint, strict=True) - model.save_pretrained(args.dump_path) - print(f"saving converted unet model in {args.dump_path}") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.") - args = parser.parse_args() - - main(args) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_dummies.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_dummies.py deleted file mode 100644 index 16b7c8c117dc453f0956d6318d217c3395af7792..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_dummies.py +++ /dev/null @@ -1,172 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import argparse -import os -import re - - -# All paths are set with the intent you should run this script from the root of the repo with the command -# python utils/check_dummies.py -PATH_TO_DIFFUSERS = "src/diffusers" - -# Matches is_xxx_available() -_re_backend = re.compile(r"is\_([a-z_]*)_available\(\)") -# Matches from xxx import bla -_re_single_line_import = re.compile(r"\s+from\s+\S*\s+import\s+([^\(\s].*)\n") - - -DUMMY_CONSTANT = """ -{0} = None -""" - -DUMMY_CLASS = """ -class {0}(metaclass=DummyObject): - _backends = {1} - - def __init__(self, *args, **kwargs): - requires_backends(self, {1}) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, {1}) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, {1}) -""" - - -DUMMY_FUNCTION = """ -def {0}(*args, **kwargs): - requires_backends({0}, {1}) -""" - - -def find_backend(line): - """Find one (or multiple) backend in a code line of the init.""" - backends = _re_backend.findall(line) - if len(backends) == 0: - return None - - return "_and_".join(backends) - - -def read_init(): - """Read the init and extracts PyTorch, TensorFlow, SentencePiece and Tokenizers objects.""" - with open(os.path.join(PATH_TO_DIFFUSERS, "__init__.py"), "r", encoding="utf-8", newline="\n") as f: - lines = f.readlines() - - # Get to the point we do the actual imports for type checking - line_index = 0 - backend_specific_objects = {} - # Go through the end of the file - while line_index < len(lines): - # If the line contains is_backend_available, we grab all 
objects associated with the `else` block - backend = find_backend(lines[line_index]) - if backend is not None: - while not lines[line_index].startswith("else:"): - line_index += 1 - line_index += 1 - objects = [] - # Until we unindent, add backend objects to the list - while line_index < len(lines) and len(lines[line_index]) > 1: - line = lines[line_index] - single_line_import_search = _re_single_line_import.search(line) - if single_line_import_search is not None: - objects.extend(single_line_import_search.groups()[0].split(", ")) - elif line.startswith(" " * 8): - objects.append(line[8:-2]) - line_index += 1 - - if len(objects) > 0: - backend_specific_objects[backend] = objects - else: - line_index += 1 - - return backend_specific_objects - - -def create_dummy_object(name, backend_name): - """Create the code for the dummy object corresponding to `name`.""" - if name.isupper(): - return DUMMY_CONSTANT.format(name) - elif name.islower(): - return DUMMY_FUNCTION.format(name, backend_name) - else: - return DUMMY_CLASS.format(name, backend_name) - - -def create_dummy_files(backend_specific_objects=None): - """Create the content of the dummy files.""" - if backend_specific_objects is None: - backend_specific_objects = read_init() - # For special correspondence backend to module name as used in the function requires_modulename - dummy_files = {} - - for backend, objects in backend_specific_objects.items(): - backend_name = "[" + ", ".join(f'"{b}"' for b in backend.split("_and_")) + "]" - dummy_file = "# This file is autogenerated by the command `make fix-copies`, do not edit.\n" - dummy_file += "from ..utils import DummyObject, requires_backends\n\n" - dummy_file += "\n".join([create_dummy_object(o, backend_name) for o in objects]) - dummy_files[backend] = dummy_file - - return dummy_files - - -def check_dummies(overwrite=False): - """Check if the dummy files are up to date and maybe `overwrite` with the right content.""" - dummy_files = create_dummy_files() - # For 
special correspondence backend to shortcut as used in utils/dummy_xxx_objects.py - short_names = {"torch": "pt"} - - # Locate actual dummy modules and read their content. - path = os.path.join(PATH_TO_DIFFUSERS, "utils") - dummy_file_paths = { - backend: os.path.join(path, f"dummy_{short_names.get(backend, backend)}_objects.py") - for backend in dummy_files.keys() - } - - actual_dummies = {} - for backend, file_path in dummy_file_paths.items(): - if os.path.isfile(file_path): - with open(file_path, "r", encoding="utf-8", newline="\n") as f: - actual_dummies[backend] = f.read() - else: - actual_dummies[backend] = "" - - for backend in dummy_files.keys(): - if dummy_files[backend] != actual_dummies[backend]: - if overwrite: - print( - f"Updating diffusers.utils.dummy_{short_names.get(backend, backend)}_objects.py as the main " - "__init__ has new objects." - ) - with open(dummy_file_paths[backend], "w", encoding="utf-8", newline="\n") as f: - f.write(dummy_files[backend]) - else: - raise ValueError( - "The main __init__ has objects that are not present in " - f"diffusers.utils.dummy_{short_names.get(backend, backend)}_objects.py. Run `make fix-copies` " - "to fix this." 
- ) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.") - args = parser.parse_args() - - check_dummies(args.fix_and_overwrite) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/dcn/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/dcn/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py deleted file mode 100644 index 3b3683af235f46df36d8793e52c2b9c52e0defeb..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/dcn/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = '../cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py' -model = dict( - backbone=dict( - dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_480x480_40k_pascal_context_59.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_480x480_40k_pascal_context_59.py deleted file mode 100644 index 655b4604677b3d4c5eb155e8b2f1cdacbd4194d5..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_480x480_40k_pascal_context_59.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = './fcn_hr18_480x480_40k_pascal_context_59.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w48', - backbone=dict( - extra=dict( - stage2=dict(num_channels=(48, 96)), - stage3=dict(num_channels=(48, 96, 192)), - stage4=dict(num_channels=(48, 96, 192, 384)))), - decode_head=dict( - in_channels=[48, 96, 192, 384], channels=sum([48, 96, 192, 384]))) diff --git a/spaces/Andyrasika/xlm-roberta-base-finetuned-panx-de/README.md b/spaces/Andyrasika/xlm-roberta-base-finetuned-panx-de/README.md deleted file mode 100644 index 
6179aac6198d51dc90bbc23e56c4f646fa6b779b..0000000000000000000000000000000000000000 --- a/spaces/Andyrasika/xlm-roberta-base-finetuned-panx-de/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Xlm Roberta Base Finetuned Panx De -emoji: 🌍 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AnimalEquality/chatbot/scripts/nbdev_prepare_modded.sh b/spaces/AnimalEquality/chatbot/scripts/nbdev_prepare_modded.sh deleted file mode 100644 index 3079e40ec8fabf3a56d9858d42853e85f893c7a8..0000000000000000000000000000000000000000 --- a/spaces/AnimalEquality/chatbot/scripts/nbdev_prepare_modded.sh +++ /dev/null @@ -1,4 +0,0 @@ -#!/bin/bash -# Run from root dir -nbdev_prepare -scripts/nbdev_readme_patch_hface.sh \ No newline at end of file diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/ctransformers_model.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/ctransformers_model.py deleted file mode 100644 index 70ce92f54bdb6704ce58f013fd28b0014a5f3b28..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/ctransformers_model.py +++ /dev/null @@ -1,79 +0,0 @@ -from ctransformers import AutoConfig, AutoModelForCausalLM - -from modules import shared -from modules.callbacks import Iteratorize -from modules.logging_colors import logger - - -class CtransformersModel: - def __init__(self): - pass - - @classmethod - def from_pretrained(cls, path): - result = cls() - - config = AutoConfig.from_pretrained( - str(path), - threads=shared.args.threads if shared.args.threads != 0 else -1, - gpu_layers=shared.args.n_gpu_layers, - batch_size=shared.args.n_batch, - context_length=shared.args.n_ctx, - stream=True, - mmap=not shared.args.no_mmap, - mlock=shared.args.mlock - ) - - result.model = 
AutoModelForCausalLM.from_pretrained( - str(result.model_dir(path) if result.model_type_is_auto() else path), - model_type=(None if result.model_type_is_auto() else shared.args.model_type), - config=config - ) - - logger.info(f'Using ctransformers model_type: {result.model.model_type} for {result.model.model_path}') - return result, result - - def model_type_is_auto(self): - return shared.args.model_type is None or shared.args.model_type == "Auto" or shared.args.model_type == "None" - - def model_dir(self, path): - if path.is_file(): - return path.parent - - return path - - def encode(self, string, **kwargs): - return self.model.tokenize(string) - - def decode(self, ids): - return self.model.detokenize(ids) - - def generate(self, prompt, state, callback=None): - prompt = prompt if type(prompt) is str else prompt.decode() - # ctransformers uses -1 for random seed - generator = self.model( - prompt=prompt, - max_new_tokens=state['max_new_tokens'], - temperature=state['temperature'], - top_p=state['top_p'], - top_k=state['top_k'], - repetition_penalty=state['repetition_penalty'], - last_n_tokens=state['repetition_penalty_range'], - seed=int(state['seed']) - ) - - output = "" - for token in generator: - if callback: - callback(token) - - output += token - - return output - - def generate_with_streaming(self, *args, **kwargs): - with Iteratorize(self.generate, args, kwargs, callback=None) as generator: - reply = '' - for token in generator: - reply += token - yield reply diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/ade.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/ade.py deleted file mode 100644 index 5913e43775ed4920b6934c855eb5a37c54218ebf..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/ade.py +++ /dev/null @@ -1,84 +0,0 @@ -from .builder import DATASETS -from .custom import CustomDataset - - 
-@DATASETS.register_module() -class ADE20KDataset(CustomDataset): - """ADE20K dataset. - - In segmentation map annotation for ADE20K, 0 stands for background, which - is not included in 150 categories. ``reduce_zero_label`` is fixed to True. - The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is fixed to - '.png'. - """ - CLASSES = ( - 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road', 'bed ', - 'windowpane', 'grass', 'cabinet', 'sidewalk', 'person', 'earth', - 'door', 'table', 'mountain', 'plant', 'curtain', 'chair', 'car', - 'water', 'painting', 'sofa', 'shelf', 'house', 'sea', 'mirror', 'rug', - 'field', 'armchair', 'seat', 'fence', 'desk', 'rock', 'wardrobe', - 'lamp', 'bathtub', 'railing', 'cushion', 'base', 'box', 'column', - 'signboard', 'chest of drawers', 'counter', 'sand', 'sink', - 'skyscraper', 'fireplace', 'refrigerator', 'grandstand', 'path', - 'stairs', 'runway', 'case', 'pool table', 'pillow', 'screen door', - 'stairway', 'river', 'bridge', 'bookcase', 'blind', 'coffee table', - 'toilet', 'flower', 'book', 'hill', 'bench', 'countertop', 'stove', - 'palm', 'kitchen island', 'computer', 'swivel chair', 'boat', 'bar', - 'arcade machine', 'hovel', 'bus', 'towel', 'light', 'truck', 'tower', - 'chandelier', 'awning', 'streetlight', 'booth', 'television receiver', - 'airplane', 'dirt track', 'apparel', 'pole', 'land', 'bannister', - 'escalator', 'ottoman', 'bottle', 'buffet', 'poster', 'stage', 'van', - 'ship', 'fountain', 'conveyer belt', 'canopy', 'washer', 'plaything', - 'swimming pool', 'stool', 'barrel', 'basket', 'waterfall', 'tent', - 'bag', 'minibike', 'cradle', 'oven', 'ball', 'food', 'step', 'tank', - 'trade name', 'microwave', 'pot', 'animal', 'bicycle', 'lake', - 'dishwasher', 'screen', 'blanket', 'sculpture', 'hood', 'sconce', - 'vase', 'traffic light', 'tray', 'ashcan', 'fan', 'pier', 'crt screen', - 'plate', 'monitor', 'bulletin board', 'shower', 'radiator', 'glass', - 'clock', 'flag') - - PALETTE = [[120, 120, 120], 
[180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], - [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], - [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], - [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], - [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], - [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], - [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], - [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], - [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], - [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], - [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], - [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], - [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], - [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], - [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], - [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], - [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], - [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], - [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 
255], - [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], - [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], - [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], - [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], - [102, 255, 0], [92, 0, 255]] - - def __init__(self, **kwargs): - super(ADE20KDataset, self).__init__( - img_suffix='.jpg', - seg_map_suffix='.png', - reduce_zero_label=True, - **kwargs) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/data/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Apex-X/GODROOP/roop/utilities.py b/spaces/Apex-X/GODROOP/roop/utilities.py deleted file mode 100644 index 90c8d981f5f159a459ca0c08cc23dfac8d04c068..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/GODROOP/roop/utilities.py +++ /dev/null @@ -1,141 +0,0 @@ -import glob -import mimetypes -import os -import platform -import shutil -import ssl -import subprocess -import urllib -from pathlib import Path -from typing import List, Any -from tqdm import tqdm - -import roop.globals - -TEMP_FILE = 'temp.mp4' -TEMP_DIRECTORY = 'temp' - -# monkey patch ssl for mac -if platform.system().lower() == 'darwin': - ssl._create_default_https_context = ssl._create_unverified_context - - -def run_ffmpeg(args: List[str]) -> bool: - commands = ['ffmpeg', '-hide_banner', '-hwaccel', 'auto', '-loglevel', roop.globals.log_level] - commands.extend(args) - try: - subprocess.check_output(commands, stderr=subprocess.STDOUT) - return True - except Exception: - pass - return False - - -def detect_fps(target_path: str) -> float: - command = ['ffprobe', '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=r_frame_rate', '-of', 'default=noprint_wrappers=1:nokey=1', target_path] - output = subprocess.check_output(command).decode().strip().split('/') - try: - numerator, denominator 
= map(int, output) - return numerator / denominator - except Exception: - pass - return 30.0 - - -def extract_frames(target_path: str) -> None: - temp_directory_path = get_temp_directory_path(target_path) - run_ffmpeg(['-i', target_path, '-pix_fmt', 'rgb24', os.path.join(temp_directory_path, '%04d.png')]) - - -def create_video(target_path: str, fps: float = 30.0) -> None: - temp_output_path = get_temp_output_path(target_path) - temp_directory_path = get_temp_directory_path(target_path) - run_ffmpeg(['-r', str(fps), '-i', os.path.join(temp_directory_path, '%04d.png'), '-c:v', roop.globals.video_encoder, '-crf', str(roop.globals.video_quality), '-pix_fmt', 'yuv420p', '-vf', 'colorspace=bt709:iall=bt601-6-625:fast=1', '-y', temp_output_path]) - - -def restore_audio(target_path: str, output_path: str) -> None: - temp_output_path = get_temp_output_path(target_path) - done = run_ffmpeg(['-i', temp_output_path, '-i', target_path, '-c:v', 'copy', '-map', '0:v:0', '-map', '1:a:0', '-y', output_path]) - if not done: - move_temp(target_path, output_path) - - -def get_temp_frame_paths(target_path: str) -> List[str]: - temp_directory_path = get_temp_directory_path(target_path) - return glob.glob((os.path.join(glob.escape(temp_directory_path), '*.png'))) - - -def get_temp_directory_path(target_path: str) -> str: - target_name, _ = os.path.splitext(os.path.basename(target_path)) - target_directory_path = os.path.dirname(target_path) - return os.path.join(target_directory_path, TEMP_DIRECTORY, target_name) - - -def get_temp_output_path(target_path: str) -> str: - temp_directory_path = get_temp_directory_path(target_path) - return os.path.join(temp_directory_path, TEMP_FILE) - - -def normalize_output_path(source_path: str, target_path: str, output_path: str) -> Any: - if source_path and target_path: - source_name, _ = os.path.splitext(os.path.basename(source_path)) - target_name, target_extension = os.path.splitext(os.path.basename(target_path)) - if os.path.isdir(output_path): - 
return os.path.join(output_path, source_name + '-' + target_name + target_extension) - return output_path - - -def create_temp(target_path: str) -> None: - temp_directory_path = get_temp_directory_path(target_path) - Path(temp_directory_path).mkdir(parents=True, exist_ok=True) - - -def move_temp(target_path: str, output_path: str) -> None: - temp_output_path = get_temp_output_path(target_path) - if os.path.isfile(temp_output_path): - if os.path.isfile(output_path): - os.remove(output_path) - shutil.move(temp_output_path, output_path) - - -def clean_temp(target_path: str) -> None: - temp_directory_path = get_temp_directory_path(target_path) - parent_directory_path = os.path.dirname(temp_directory_path) - if not roop.globals.keep_frames and os.path.isdir(temp_directory_path): - shutil.rmtree(temp_directory_path) - if os.path.exists(parent_directory_path) and not os.listdir(parent_directory_path): - os.rmdir(parent_directory_path) - - -def has_image_extension(image_path: str) -> bool: - return image_path.lower().endswith(('png', 'jpg', 'jpeg', 'webp')) - - -def is_image(image_path: str) -> bool: - if image_path and os.path.isfile(image_path): - mimetype, _ = mimetypes.guess_type(image_path) - return bool(mimetype and mimetype.startswith('image/')) - return False - - -def is_video(video_path: str) -> bool: - if video_path and os.path.isfile(video_path): - mimetype, _ = mimetypes.guess_type(video_path) - return bool(mimetype and mimetype.startswith('video/')) - return False - - -def conditional_download(download_directory_path: str, urls: List[str]) -> None: - if not os.path.exists(download_directory_path): - os.makedirs(download_directory_path) - for url in urls: - download_file_path = os.path.join(download_directory_path, os.path.basename(url)) - if not os.path.exists(download_file_path): - request = urllib.request.urlopen(url) # type: ignore[attr-defined] - total = int(request.headers.get('Content-Length', 0)) - with tqdm(total=total, desc='Downloading', unit='B', 
unit_scale=True, unit_divisor=1024) as progress: - urllib.request.urlretrieve(url, download_file_path, reporthook=lambda count, block_size, total_size: progress.update(block_size)) # type: ignore[attr-defined] - - -def resolve_relative_path(path: str) -> str: - return os.path.abspath(os.path.join(os.path.dirname(__file__), path)) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/_internal_utils.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/_internal_utils.py deleted file mode 100644 index 7dc9bc53360e95abfa99fe1ebd205a3d3ac620e6..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/_internal_utils.py +++ /dev/null @@ -1,48 +0,0 @@ -""" -requests._internal_utils -~~~~~~~~~~~~~~ - -Provides utility functions that are consumed internally by Requests -which depend on extremely few external helpers (such as compat) -""" -import re - -from .compat import builtin_str - -_VALID_HEADER_NAME_RE_BYTE = re.compile(rb"^[^:\s][^:\r\n]*$") -_VALID_HEADER_NAME_RE_STR = re.compile(r"^[^:\s][^:\r\n]*$") -_VALID_HEADER_VALUE_RE_BYTE = re.compile(rb"^\S[^\r\n]*$|^$") -_VALID_HEADER_VALUE_RE_STR = re.compile(r"^\S[^\r\n]*$|^$") - -HEADER_VALIDATORS = { - bytes: (_VALID_HEADER_NAME_RE_BYTE, _VALID_HEADER_VALUE_RE_BYTE), - str: (_VALID_HEADER_NAME_RE_STR, _VALID_HEADER_VALUE_RE_STR), -} - - -def to_native_string(string, encoding="ascii"): - """Given a string object, regardless of type, returns a representation of - that string in the native string type, encoding and decoding where - necessary. This assumes ASCII unless told otherwise. - """ - if isinstance(string, builtin_str): - out = string - else: - out = string.decode(encoding) - - return out - - -def unicode_is_ascii(u_string): - """Determine if unicode string only contains ASCII characters. 
- - :param str u_string: unicode string to check. Must be unicode - and not Python 2 `str`. - :rtype: bool - """ - assert isinstance(u_string, str) - try: - u_string.encode("ascii") - return True - except UnicodeEncodeError: - return False diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/aspp.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/aspp.py deleted file mode 100644 index 14861aa9ede4fea6a69a49f189bcab997b558148..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/aspp.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from copy import deepcopy -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from .batch_norm import get_norm -from .blocks import DepthwiseSeparableConv2d -from .wrappers import Conv2d - - -class ASPP(nn.Module): - """ - Atrous Spatial Pyramid Pooling (ASPP). - """ - - def __init__( - self, - in_channels, - out_channels, - dilations, - *, - norm, - activation, - pool_kernel_size=None, - dropout: float = 0.0, - use_depthwise_separable_conv=False, - ): - """ - Args: - in_channels (int): number of input channels for ASPP. - out_channels (int): number of output channels. - dilations (list): a list of 3 dilations in ASPP. - norm (str or callable): normalization for all conv layers. - See :func:`layers.get_norm` for supported format. norm is - applied to all conv layers except the conv following - global average pooling. - activation (callable): activation function. - pool_kernel_size (tuple, list): the average pooling size (kh, kw) - for image pooling layer in ASPP. If set to None, it always - performs global average pooling. If not None, it must be - divisible by the shape of inputs in forward(). 
It is recommended - to use a fixed input feature size in training, and set this - option to match this size, so that it performs global average - pooling in training, and the size of the pooling window stays - consistent in inference. - dropout (float): apply dropout on the output of ASPP. It is used in - the official DeepLab implementation with a rate of 0.1: - https://github.com/tensorflow/models/blob/21b73d22f3ed05b650e85ac50849408dd36de32e/research/deeplab/model.py#L532 # noqa - use_depthwise_separable_conv (bool): use DepthwiseSeparableConv2d - for 3x3 convs in ASPP, proposed in :paper:`DeepLabV3+`. - """ - super(ASPP, self).__init__() - assert len(dilations) == 3, "ASPP expects 3 dilations, got {}".format(len(dilations)) - self.pool_kernel_size = pool_kernel_size - self.dropout = dropout - use_bias = norm == "" - self.convs = nn.ModuleList() - # conv 1x1 - self.convs.append( - Conv2d( - in_channels, - out_channels, - kernel_size=1, - bias=use_bias, - norm=get_norm(norm, out_channels), - activation=deepcopy(activation), - ) - ) - weight_init.c2_xavier_fill(self.convs[-1]) - # atrous convs - for dilation in dilations: - if use_depthwise_separable_conv: - self.convs.append( - DepthwiseSeparableConv2d( - in_channels, - out_channels, - kernel_size=3, - padding=dilation, - dilation=dilation, - norm1=norm, - activation1=deepcopy(activation), - norm2=norm, - activation2=deepcopy(activation), - ) - ) - else: - self.convs.append( - Conv2d( - in_channels, - out_channels, - kernel_size=3, - padding=dilation, - dilation=dilation, - bias=use_bias, - norm=get_norm(norm, out_channels), - activation=deepcopy(activation), - ) - ) - weight_init.c2_xavier_fill(self.convs[-1]) - # image pooling - # We do not add BatchNorm because the spatial resolution is 1x1, - # the original TF implementation has BatchNorm. 
- if pool_kernel_size is None: - image_pooling = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - Conv2d(in_channels, out_channels, 1, bias=True, activation=deepcopy(activation)), - ) - else: - image_pooling = nn.Sequential( - nn.AvgPool2d(kernel_size=pool_kernel_size, stride=1), - Conv2d(in_channels, out_channels, 1, bias=True, activation=deepcopy(activation)), - ) - weight_init.c2_xavier_fill(image_pooling[1]) - self.convs.append(image_pooling) - - self.project = Conv2d( - 5 * out_channels, - out_channels, - kernel_size=1, - bias=use_bias, - norm=get_norm(norm, out_channels), - activation=deepcopy(activation), - ) - weight_init.c2_xavier_fill(self.project) - - def forward(self, x): - size = x.shape[-2:] - if self.pool_kernel_size is not None: - if size[0] % self.pool_kernel_size[0] or size[1] % self.pool_kernel_size[1]: - raise ValueError( - "`pool_kernel_size` must be divisible by the shape of inputs. " - "Input size: {} `pool_kernel_size`: {}".format(size, self.pool_kernel_size) - ) - res = [] - for conv in self.convs: - res.append(conv(x)) - res[-1] = F.interpolate(res[-1], size=size, mode="bilinear", align_corners=False) - res = torch.cat(res, dim=1) - res = self.project(res) - res = F.dropout(res, self.dropout, training=self.training) if self.dropout > 0 else res - return res diff --git a/spaces/AzizR/FaceRecognitionGradio/README.md b/spaces/AzizR/FaceRecognitionGradio/README.md deleted file mode 100644 index 25d5043523c8e6f5379202fbb585d53c3b4167a4..0000000000000000000000000000000000000000 --- a/spaces/AzizR/FaceRecognitionGradio/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: FaceRecognitionGradio -emoji: 👁 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Betacuckgpt/ehartford-Wizard-Vicuna-30B-Uncensored123/app.py b/spaces/Betacuckgpt/ehartford-Wizard-Vicuna-30B-Uncensored123/app.py 
deleted file mode 100644 index 4cdd13923578027e405184827b4f353131ce7341..0000000000000000000000000000000000000000 --- a/spaces/Betacuckgpt/ehartford-Wizard-Vicuna-30B-Uncensored123/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/ehartford/Wizard-Vicuna-30B-Uncensored").launch() \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/hebrewprober.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/hebrewprober.py deleted file mode 100644 index 785d0057bcc0ea74a4b8d65ab7a0de78474bf892..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/hebrewprober.py +++ /dev/null @@ -1,316 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Universal charset detector code. -# -# The Initial Developer of the Original Code is -# Shy Shalom -# Portions created by the Initial Developer are Copyright (C) 2005 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. 
-# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from typing import Optional, Union - -from .charsetprober import CharSetProber -from .enums import ProbingState -from .sbcharsetprober import SingleByteCharSetProber - -# This prober doesn't actually recognize a language or a charset. -# It is a helper prober for the use of the Hebrew model probers - -### General ideas of the Hebrew charset recognition ### -# -# Four main charsets exist in Hebrew: -# "ISO-8859-8" - Visual Hebrew -# "windows-1255" - Logical Hebrew -# "ISO-8859-8-I" - Logical Hebrew -# "x-mac-hebrew" - ?? Logical Hebrew ?? -# -# Both "ISO" charsets use a completely identical set of code points, whereas -# "windows-1255" and "x-mac-hebrew" are two different proper supersets of -# these code points. windows-1255 defines additional characters in the range -# 0x80-0x9F as some misc punctuation marks as well as some Hebrew-specific -# diacritics and additional 'Yiddish' ligature letters in the range 0xc0-0xd6. -# x-mac-hebrew defines similar additional code points but with a different -# mapping. -# -# As far as an average Hebrew text with no diacritics is concerned, all four -# charsets are identical with respect to code points. Meaning that for the -# main Hebrew alphabet, all four map the same values to all 27 Hebrew letters -# (including final letters). -# -# The dominant difference between these charsets is their directionality. -# "Visual" directionality means that the text is ordered as if the renderer is -# not aware of a BIDI rendering algorithm. The renderer sees the text and -# draws it from left to right. The text itself when ordered naturally is read -# backwards. 
A buffer of Visual Hebrew generally looks like so: -# "[last word of first line spelled backwards] [whole line ordered backwards -# and spelled backwards] [first word of first line spelled backwards] -# [end of line] [last word of second line] ... etc' " -# adding punctuation marks, numbers and English text to visual text is -# naturally also "visual" and from left to right. -# -# "Logical" directionality means the text is ordered "naturally" according to -# the order it is read. It is the responsibility of the renderer to display -# the text from right to left. A BIDI algorithm is used to place general -# punctuation marks, numbers and English text in the text. -# -# Texts in x-mac-hebrew are almost impossible to find on the Internet. From -# what little evidence I could find, it seems that its general directionality -# is Logical. -# -# To sum up all of the above, the Hebrew probing mechanism knows about two -# charsets: -# Visual Hebrew - "ISO-8859-8" - backwards text - Words and sentences are -# backwards while line order is natural. For charset recognition purposes -# the line order is unimportant (In fact, for this implementation, even -# word order is unimportant). -# Logical Hebrew - "windows-1255" - normal, naturally ordered text. -# -# "ISO-8859-8-I" is a subset of windows-1255 and doesn't need to be -# specifically identified. -# "x-mac-hebrew" is also identified as windows-1255. A text in x-mac-hebrew -# that contain special punctuation marks or diacritics is displayed with -# some unconverted characters showing as question marks. This problem might -# be corrected using another model prober for x-mac-hebrew. Due to the fact -# that x-mac-hebrew texts are so rare, writing another model prober isn't -# worth the effort and performance hit. -# -#### The Prober #### -# -# The prober is divided between two SBCharSetProbers and a HebrewProber, -# all of which are managed, created, fed data, inquired and deleted by the -# SBCSGroupProber. 
The two SBCharSetProbers identify that the text is in -# fact some kind of Hebrew, Logical or Visual. The final decision about which -# one is it is made by the HebrewProber by combining final-letter scores -# with the scores of the two SBCharSetProbers to produce a final answer. -# -# The SBCSGroupProber is responsible for stripping the original text of HTML -# tags, English characters, numbers, low-ASCII punctuation characters, spaces -# and new lines. It reduces any sequence of such characters to a single space. -# The buffer fed to each prober in the SBCS group prober is pure text in -# high-ASCII. -# The two SBCharSetProbers (model probers) share the same language model: -# Win1255Model. -# The first SBCharSetProber uses the model normally as any other -# SBCharSetProber does, to recognize windows-1255, upon which this model was -# built. The second SBCharSetProber is told to make the pair-of-letter -# lookup in the language model backwards. This in practice exactly simulates -# a visual Hebrew model using the windows-1255 logical Hebrew model. -# -# The HebrewProber is not using any language model. All it does is look for -# final-letter evidence suggesting the text is either logical Hebrew or visual -# Hebrew. Disjointed from the model probers, the results of the HebrewProber -# alone are meaningless. HebrewProber always returns 0.00 as confidence -# since it never identifies a charset by itself. Instead, the pointer to the -# HebrewProber is passed to the model probers as a helper "Name Prober". -# When the Group prober receives a positive identification from any prober, -# it asks for the name of the charset identified. If the prober queried is a -# Hebrew model prober, the model prober forwards the call to the -# HebrewProber to make the final decision. In the HebrewProber, the -# decision is made according to the final-letters scores maintained and Both -# model probers scores. 
The answer is returned in the form of the name of the -# charset identified, either "windows-1255" or "ISO-8859-8". - - -class HebrewProber(CharSetProber): - SPACE = 0x20 - # windows-1255 / ISO-8859-8 code points of interest - FINAL_KAF = 0xEA - NORMAL_KAF = 0xEB - FINAL_MEM = 0xED - NORMAL_MEM = 0xEE - FINAL_NUN = 0xEF - NORMAL_NUN = 0xF0 - FINAL_PE = 0xF3 - NORMAL_PE = 0xF4 - FINAL_TSADI = 0xF5 - NORMAL_TSADI = 0xF6 - - # Minimum Visual vs Logical final letter score difference. - # If the difference is below this, don't rely solely on the final letter score - # distance. - MIN_FINAL_CHAR_DISTANCE = 5 - - # Minimum Visual vs Logical model score difference. - # If the difference is below this, don't rely at all on the model score - # distance. - MIN_MODEL_DISTANCE = 0.01 - - VISUAL_HEBREW_NAME = "ISO-8859-8" - LOGICAL_HEBREW_NAME = "windows-1255" - - def __init__(self) -> None: - super().__init__() - self._final_char_logical_score = 0 - self._final_char_visual_score = 0 - self._prev = self.SPACE - self._before_prev = self.SPACE - self._logical_prober: Optional[SingleByteCharSetProber] = None - self._visual_prober: Optional[SingleByteCharSetProber] = None - self.reset() - - def reset(self) -> None: - self._final_char_logical_score = 0 - self._final_char_visual_score = 0 - # The two last characters seen in the previous buffer, - # mPrev and mBeforePrev are initialized to space in order to simulate - # a word delimiter at the beginning of the data - self._prev = self.SPACE - self._before_prev = self.SPACE - # These probers are owned by the group prober. 
- - def set_model_probers( - self, - logical_prober: SingleByteCharSetProber, - visual_prober: SingleByteCharSetProber, - ) -> None: - self._logical_prober = logical_prober - self._visual_prober = visual_prober - - def is_final(self, c: int) -> bool: - return c in [ - self.FINAL_KAF, - self.FINAL_MEM, - self.FINAL_NUN, - self.FINAL_PE, - self.FINAL_TSADI, - ] - - def is_non_final(self, c: int) -> bool: - # The normal Tsadi is not a good Non-Final letter due to words like - # 'lechotet' (to chat) containing an apostrophe after the tsadi. This - # apostrophe is converted to a space in FilterWithoutEnglishLetters - # causing the Non-Final tsadi to appear at an end of a word even - # though this is not the case in the original text. - # The letters Pe and Kaf rarely display a related behavior of not being - # a good Non-Final letter. Words like 'Pop', 'Winamp' and 'Mubarak' - # for example legally end with a Non-Final Pe or Kaf. However, the - # benefit of these letters as Non-Final letters outweighs the damage - # since these words are quite rare. - return c in [self.NORMAL_KAF, self.NORMAL_MEM, self.NORMAL_NUN, self.NORMAL_PE] - - def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState: - # Final letter analysis for logical-visual decision. - # Look for evidence that the received buffer is either logical Hebrew - # or visual Hebrew. - # The following cases are checked: - # 1) A word longer than 1 letter, ending with a final letter. This is - # an indication that the text is laid out "naturally" since the - # final letter really appears at the end. +1 for logical score. - # 2) A word longer than 1 letter, ending with a Non-Final letter. In - # normal Hebrew, words ending with Kaf, Mem, Nun, Pe or Tsadi, - # should not end with the Non-Final form of that letter. Exceptions - # to this rule are mentioned above in isNonFinal(). This is an - # indication that the text is laid out backwards. 
+1 for visual - # score - # 3) A word longer than 1 letter, starting with a final letter. Final - # letters should not appear at the beginning of a word. This is an - # indication that the text is laid out backwards. +1 for visual - # score. - # - # The visual score and logical score are accumulated throughout the - # text and are finally checked against each other in GetCharSetName(). - # No checking for final letters in the middle of words is done since - # that case is not an indication for either Logical or Visual text. - # - # We automatically filter out all 7-bit characters (replace them with - # spaces) so the word boundary detection works properly. [MAP] - - if self.state == ProbingState.NOT_ME: - # Both model probers say it's not them. No reason to continue. - return ProbingState.NOT_ME - - byte_str = self.filter_high_byte_only(byte_str) - - for cur in byte_str: - if cur == self.SPACE: - # We stand on a space - a word just ended - if self._before_prev != self.SPACE: - # next-to-last char was not a space so self._prev is not a - # 1 letter word - if self.is_final(self._prev): - # case (1) [-2:not space][-1:final letter][cur:space] - self._final_char_logical_score += 1 - elif self.is_non_final(self._prev): - # case (2) [-2:not space][-1:Non-Final letter][ - # cur:space] - self._final_char_visual_score += 1 - else: - # Not standing on a space - if ( - (self._before_prev == self.SPACE) - and (self.is_final(self._prev)) - and (cur != self.SPACE) - ): - # case (3) [-2:space][-1:final letter][cur:not space] - self._final_char_visual_score += 1 - self._before_prev = self._prev - self._prev = cur - - # Forever detecting, till the end or until both model probers return - # ProbingState.NOT_ME (handled above) - return ProbingState.DETECTING - - @property - def charset_name(self) -> str: - assert self._logical_prober is not None - assert self._visual_prober is not None - - # Make the decision: is it Logical or Visual? 
- # If the final letter score distance is dominant enough, rely on it. - finalsub = self._final_char_logical_score - self._final_char_visual_score - if finalsub >= self.MIN_FINAL_CHAR_DISTANCE: - return self.LOGICAL_HEBREW_NAME - if finalsub <= -self.MIN_FINAL_CHAR_DISTANCE: - return self.VISUAL_HEBREW_NAME - - # It's not dominant enough, try to rely on the model scores instead. - modelsub = ( - self._logical_prober.get_confidence() - self._visual_prober.get_confidence() - ) - if modelsub > self.MIN_MODEL_DISTANCE: - return self.LOGICAL_HEBREW_NAME - if modelsub < -self.MIN_MODEL_DISTANCE: - return self.VISUAL_HEBREW_NAME - - # Still no good, back to final letter distance, maybe it'll save the - # day. - if finalsub < 0.0: - return self.VISUAL_HEBREW_NAME - - # (finalsub > 0 - Logical) or (don't know what to do) default to - # Logical. - return self.LOGICAL_HEBREW_NAME - - @property - def language(self) -> str: - return "Hebrew" - - @property - def state(self) -> ProbingState: - assert self._logical_prober is not None - assert self._visual_prober is not None - - # Remain active as long as any of the model probers are active. - if (self._logical_prober.state == ProbingState.NOT_ME) and ( - self._visual_prober.state == ProbingState.NOT_ME - ): - return ProbingState.NOT_ME - return ProbingState.DETECTING diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/config/_validate_pyproject/fastjsonschema_exceptions.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/config/_validate_pyproject/fastjsonschema_exceptions.py deleted file mode 100644 index d2dddd6a106f021a4723c1e8f5953ccc09e55e1f..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/config/_validate_pyproject/fastjsonschema_exceptions.py +++ /dev/null @@ -1,51 +0,0 @@ -import re - - -SPLIT_RE = re.compile(r'[\.\[\]]+') - - -class JsonSchemaException(ValueError): - """ - Base exception of ``fastjsonschema`` library. 
- """ - - -class JsonSchemaValueException(JsonSchemaException): - """ - Exception raised by validation function. Available properties: - - * ``message`` containing human-readable information what is wrong (e.g. ``data.property[index] must be smaller than or equal to 42``), - * invalid ``value`` (e.g. ``60``), - * ``name`` of a path in the data structure (e.g. ``data.property[index]``), - * ``path`` as an array in the data structure (e.g. ``['data', 'property', 'index']``), - * the whole ``definition`` which the ``value`` has to fulfil (e.g. ``{'type': 'number', 'maximum': 42}``), - * ``rule`` which the ``value`` is breaking (e.g. ``maximum``) - * and ``rule_definition`` (e.g. ``42``). - - .. versionchanged:: 2.14.0 - Added all extra properties. - """ - - def __init__(self, message, value=None, name=None, definition=None, rule=None): - super().__init__(message) - self.message = message - self.value = value - self.name = name - self.definition = definition - self.rule = rule - - @property - def path(self): - return [item for item in SPLIT_RE.split(self.name) if item != ''] - - @property - def rule_definition(self): - if not self.rule or not self.definition: - return None - return self.definition.get(self.rule) - - -class JsonSchemaDefinitionException(JsonSchemaException): - """ - Exception raised by generator of validation function. - """ diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/README.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/README.md deleted file mode 100644 index 9e2fdf818e60ffce5ee74e1940fa9978b93c4495..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/README.md +++ /dev/null @@ -1,115 +0,0 @@ -## Bottom-Up and Top-Down Attention for Visual Question Answering - -An efficient PyTorch implementation of the winning entry of the [2017 VQA Challenge](http://www.visualqa.org/challenge.html). 
- -The implementation follows the VQA system described in "Bottom-Up and -Top-Down Attention for Image Captioning and Visual Question Answering" -(https://arxiv.org/abs/1707.07998) and "Tips and Tricks for Visual -Question Answering: Learnings from the 2017 Challenge" -(https://arxiv.org/abs/1708.02711). - -## Results - -| Model | Validation Accuracy | Training Time | -| --- | --- | --- | -| Reported Model | 63.15 | 12 - 18 hours (Tesla K40) | -| Implemented Model | **63.58** | 40 - 50 minutes (Titan Xp) | - -The accuracy was calculated using the [VQA evaluation metric](http://www.visualqa.org/evaluation.html). - -## About - -This is part of a project done at CMU for the course 11-777 -Advanced Multimodal Machine Learning and is joint work by Hengyuan Hu, -Alex Xiao, and Henry Huang. - -As part of our project, we implemented bottom-up attention as a strong VQA baseline. We were planning to integrate object -detection with VQA and were very glad to see that Peter Anderson, -Damien Teney, et al. had already done that beautifully. -We hope this clean and -efficient implementation can serve as a useful baseline for future VQA -explorations. - -## Implementation Details - -Our implementation follows the overall structure of the papers but with -the following simplifications: - -1. We don't use extra data from [Visual Genome](http://visualgenome.org/). -2. We use only a fixed number of objects per image (K=36). -3. We use a simple, single-stream classifier without pre-training. -4. We use the simple ReLU activation instead of gated tanh. - -The first two points greatly reduce the training time. Our -implementation takes around 200 seconds per epoch on a single Titan Xp, while -the one described in the paper takes 1 hour per epoch. - -The third point is simply because we feel the two-stream classifier -and pre-training in the original paper are over-complicated and not -necessary. - -For the non-linear activation unit, we tried gated tanh but couldn't -make it work.
We also tried the gated linear unit (GLU) and it works better than -ReLU. Eventually we chose ReLU due to its simplicity and since the gain -from using GLU is too small to justify the fact that GLU doubles the -number of parameters. - -With these simplifications we would expect the performance to drop. For -reference, the best result on the validation set reported in the paper is -63.15. The reported result without extra data from Visual Genome is -62.48, the result using only 36 objects per image is 62.82, the result -using a two-stream classifier but no pre-training is 62.28, and the result -using ReLU is 61.63. These numbers are cited from Table 1 of the -paper: "Tips and Tricks for Visual Question Answering: Learnings from -the 2017 Challenge". With all the above simplifications aggregated, our -first implementation got around 59-60 on the validation set. - -To shrink the gap, we added some simple but powerful -modifications, including: - -1. Add dropout to alleviate overfitting -2. Double the number of neurons -3. Add weight normalization (BN does not seem to work well here) -4. Switch to the Adamax optimizer -5. Gradient clipping - -These small modifications bring the number back to ~62.80. We further -changed the concatenation-based attention module in the original paper -to a projection-based module. This new attention module is inspired by -the paper "Modeling Relationships in Referential Expressions with -Compositional Modular Networks" -(https://arxiv.org/pdf/1611.09978.pdf), but with some modifications -(implemented in attention.NewAttention). With -the help of this new attention, we boost the performance to ~63.58, -surpassing the reported best result with no extra data and less -computation cost. - -## Usage - -#### Prerequisites - -Make sure you are on a machine with an NVIDIA GPU and Python 2 with about 70 GB of disk space. - -1. Install [PyTorch v0.3](http://pytorch.org/) with CUDA and Python 2.7. -2. Install [h5py](http://docs.h5py.org/en/latest/build.html).
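The "GLU doubles the number of parameters" argument above can be checked with a quick back-of-the-envelope count: a GLU layer must produce both a candidate vector and a gate, so its linear projection needs twice the output width of a plain ReLU layer. The sketch below uses hypothetical layer sizes, not the repository's actual dimensions.

```python
def linear_params(d_in, d_out):
    # weights plus biases of one fully connected layer
    return d_in * d_out + d_out

def relu_layer_params(d_in, d_hidden):
    # ReLU itself is parameter-free, so the cost is just the linear layer
    return linear_params(d_in, d_hidden)

def glu_layer_params(d_in, d_hidden):
    # GLU(x) = a * sigmoid(b), where [a, b] = W x + c,
    # so the projection must emit 2 * d_hidden values
    return linear_params(d_in, 2 * d_hidden)

if __name__ == "__main__":
    d_in, d_hidden = 2048, 1024  # hypothetical sizes
    print(relu_layer_params(d_in, d_hidden))  # 2098176
    print(glu_layer_params(d_in, d_hidden))   # 4196352
```

The ratio is slightly more than 2x only because of the extra bias terms; for the weight matrices it is exactly double.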
- -#### Data Setup - -All data should be downloaded to a 'data/' directory in the root -directory of this repository. - -The easiest way to download the data is to run the provided script -`tools/download.sh` from the repository root. The features are -provided by and downloaded from the original authors' -[repo](https://github.com/peteanderson80/bottom-up-attention). If the -script does not work, it should be easy to examine the script and -modify the steps outlined in it according to your needs. Then run -`tools/process.sh` from the repository root to process the data to the -correct format. - -#### Training - -Simply run `python main.py` to start training. The training and -validation scores will be printed every epoch, and the best model will -be saved under the directory "saved_models". The default flags should -give you the result provided in the table above. diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/meta_arch/semantic_seg.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/meta_arch/semantic_seg.py deleted file mode 100644 index 38201fdbf29c5a30b4b24eda9fa0e8a9d3af93a7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/meta_arch/semantic_seg.py +++ /dev/null @@ -1,187 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -import numpy as np -from typing import Dict -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.layers import Conv2d, ShapeSpec -from detectron2.structures import ImageList -from detectron2.utils.registry import Registry - -from ..backbone import build_backbone -from ..postprocessing import sem_seg_postprocess -from .build import META_ARCH_REGISTRY - -__all__ = ["SemanticSegmentor", "SEM_SEG_HEADS_REGISTRY", "SemSegFPNHead", "build_sem_seg_head"] - - -SEM_SEG_HEADS_REGISTRY = Registry("SEM_SEG_HEADS") -SEM_SEG_HEADS_REGISTRY.__doc__ = """ -Registry for semantic segmentation heads, which make semantic segmentation predictions -from feature maps. -""" - - -@META_ARCH_REGISTRY.register() -class SemanticSegmentor(nn.Module): - """ - Main class for semantic segmentation architectures. - """ - - def __init__(self, cfg): - super().__init__() - - self.device = torch.device(cfg.MODEL.DEVICE) - - self.backbone = build_backbone(cfg) - self.sem_seg_head = build_sem_seg_head(cfg, self.backbone.output_shape()) - - pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to(self.device).view(-1, 1, 1) - pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).to(self.device).view(-1, 1, 1) - self.normalizer = lambda x: (x - pixel_mean) / pixel_std - - self.to(self.device) - - def forward(self, batched_inputs): - """ - Args: - batched_inputs: a list, batched outputs of :class:`DatasetMapper` . - Each item in the list contains the inputs for one image. - - For now, each item in the list is a dict that contains: - - * "image": Tensor, image in (C, H, W) format. - * "sem_seg": semantic segmentation ground truth - * Other information that's included in the original dicts, such as: - "height", "width" (int): the output resolution of the model, used in inference. - See :meth:`postprocess` for details. - - Returns: - list[dict]: - Each dict is the output for one input image. 
- The dict contains one key "sem_seg" whose value is a - Tensor of the output resolution that represents the - per-pixel segmentation prediction. - """ - images = [x["image"].to(self.device) for x in batched_inputs] - images = [self.normalizer(x) for x in images] - images = ImageList.from_tensors(images, self.backbone.size_divisibility) - - features = self.backbone(images.tensor) - - if "sem_seg" in batched_inputs[0]: - targets = [x["sem_seg"].to(self.device) for x in batched_inputs] - targets = ImageList.from_tensors( - targets, self.backbone.size_divisibility, self.sem_seg_head.ignore_value - ).tensor - else: - targets = None - results, losses = self.sem_seg_head(features, targets) - - if self.training: - return losses - - processed_results = [] - for result, input_per_image, image_size in zip(results, batched_inputs, images.image_sizes): - height = input_per_image.get("height") - width = input_per_image.get("width") - r = sem_seg_postprocess(result, image_size, height, width) - processed_results.append({"sem_seg": r}) - return processed_results - - -def build_sem_seg_head(cfg, input_shape): - """ - Build a semantic segmentation head from `cfg.MODEL.SEM_SEG_HEAD.NAME`. - """ - name = cfg.MODEL.SEM_SEG_HEAD.NAME - return SEM_SEG_HEADS_REGISTRY.get(name)(cfg, input_shape) - - -@SEM_SEG_HEADS_REGISTRY.register() -class SemSegFPNHead(nn.Module): - """ - A semantic segmentation head described in detail in the Panoptic Feature Pyramid Networks paper - (https://arxiv.org/abs/1901.02446). It takes FPN features as input and merges information from - all levels of the FPN into single output. 
- """ - - def __init__(self, cfg, input_shape: Dict[str, ShapeSpec]): - super().__init__() - - # fmt: off - self.in_features = cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES - feature_strides = {k: v.stride for k, v in input_shape.items()} - feature_channels = {k: v.channels for k, v in input_shape.items()} - self.ignore_value = cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE - num_classes = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES - conv_dims = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM - self.common_stride = cfg.MODEL.SEM_SEG_HEAD.COMMON_STRIDE - norm = cfg.MODEL.SEM_SEG_HEAD.NORM - self.loss_weight = cfg.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT - # fmt: on - - self.scale_heads = [] - for in_feature in self.in_features: - head_ops = [] - head_length = max( - 1, int(np.log2(feature_strides[in_feature]) - np.log2(self.common_stride)) - ) - for k in range(head_length): - norm_module = nn.GroupNorm(32, conv_dims) if norm == "GN" else None - conv = Conv2d( - feature_channels[in_feature] if k == 0 else conv_dims, - conv_dims, - kernel_size=3, - stride=1, - padding=1, - bias=not norm, - norm=norm_module, - activation=F.relu, - ) - weight_init.c2_msra_fill(conv) - head_ops.append(conv) - if feature_strides[in_feature] != self.common_stride: - head_ops.append( - nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False) - ) - self.scale_heads.append(nn.Sequential(*head_ops)) - self.add_module(in_feature, self.scale_heads[-1]) - self.predictor = Conv2d(conv_dims, num_classes, kernel_size=1, stride=1, padding=0) - weight_init.c2_msra_fill(self.predictor) - - def forward(self, features, targets=None): - """ - Returns: - In training, returns (None, dict of losses) - In inference, returns (predictions, {}) - """ - x = self.layers(features) - if self.training: - return None, self.losses(x, targets) - else: - x = F.interpolate( - x, scale_factor=self.common_stride, mode="bilinear", align_corners=False - ) - return x, {} - - def layers(self, features): - for i, f in enumerate(self.in_features): - if i == 0: - x = 
self.scale_heads[i](features[f]) - else: - x = x + self.scale_heads[i](features[f]) - x = self.predictor(x) - return x - - def losses(self, predictions, targets): - predictions = F.interpolate( - predictions, scale_factor=self.common_stride, mode="bilinear", align_corners=False - ) - loss = F.cross_entropy( - predictions, targets, reduction="mean", ignore_index=self.ignore_value - ) - losses = {"loss_sem_seg": loss * self.loss_weight} - return losses diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/generate.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/generate.h deleted file mode 100644 index df77901e219d07e76cbc294299445bb0eaad0dfc..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/generate.h +++ /dev/null @@ -1,90 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. 
IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include -#include - -#include -#include - -namespace thrust -{ -namespace cuda_cub { - -// for_each functor -template -struct generate_f -{ - Generator generator; - - THRUST_FUNCTION - generate_f(Generator generator_) : generator(generator_) {} - - template - THRUST_DEVICE_FUNCTION void operator()(T const& value) - { - T & lvalue = const_cast(value); - lvalue = generator(); - } -}; - -// for_each_n -template -OutputIt __host__ __device__ -generate_n(execution_policy &policy, - OutputIt result, - Size count, - Generator generator) -{ - return cuda_cub::for_each_n(policy, - result, - count, - generate_f(generator)); -} - - // for_each -template -void __host__ __device__ -generate(execution_policy &policy, - OutputIt first, - OutputIt last, - Generator generator) -{ - cuda_cub::generate_n(policy, first, thrust::distance(first, last), generator); -} - -} // namespace cuda_cub -} // end namespace thrust -#endif diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/scan_by_key.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/scan_by_key.h deleted file mode 100644 index 1e0471b37458b8aa861a0eb1ef69457b76572657..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/scan_by_key.h +++ /dev/null @@ -1,150 +0,0 @@ -/* - * 
Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file scan_by_key.h - * \brief Sequential implementation of scan_by_key functions. - */ - -#pragma once - -#include -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace sequential -{ - - -__thrust_exec_check_disable__ -template -__host__ __device__ - OutputIterator inclusive_scan_by_key(sequential::execution_policy &, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - OutputIterator result, - BinaryPredicate binary_pred, - BinaryFunction binary_op) -{ - typedef typename thrust::iterator_traits::value_type KeyType; - typedef typename thrust::iterator_traits::value_type ValueType; - - // wrap binary_op - thrust::detail::wrapped_function< - BinaryFunction, - ValueType - > wrapped_binary_op(binary_op); - - if(first1 != last1) - { - KeyType prev_key = *first1; - ValueType prev_value = *first2; - - *result = prev_value; - - for(++first1, ++first2, ++result; - first1 != last1; - ++first1, ++first2, ++result) - { - KeyType key = *first1; - - if(binary_pred(prev_key, key)) - *result = prev_value = wrapped_binary_op(prev_value,*first2); - else - *result = prev_value = *first2; - - prev_key = key; - } - } - - return result; -} - - -__thrust_exec_check_disable__ -template -__host__ __device__ - OutputIterator exclusive_scan_by_key(sequential::execution_policy &, - InputIterator1 first1, - 
InputIterator1 last1, - InputIterator2 first2, - OutputIterator result, - T init, - BinaryPredicate binary_pred, - BinaryFunction binary_op) -{ - typedef typename thrust::iterator_traits::value_type KeyType; - typedef typename thrust::iterator_traits::value_type ValueType; - - if(first1 != last1) - { - KeyType temp_key = *first1; - ValueType temp_value = *first2; - - ValueType next = init; - - // first one is init - *result = next; - - next = binary_op(next, temp_value); - - for(++first1, ++first2, ++result; - first1 != last1; - ++first1, ++first2, ++result) - { - KeyType key = *first1; - - // use temp to permit in-place scans - temp_value = *first2; - - if (!binary_pred(temp_key, key)) - next = init; // reset sum - - *result = next; - next = binary_op(next, temp_value); - - temp_key = key; - } - } - - return result; -} - - -} // end namespace sequential -} // end namespace detail -} // end namespace system -} // end namespace thrust - diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/data/datasets/__init__.py deleted file mode 100644 index dbd92e8e2e1295d73e28f1eb2ed2368f368849a3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .coco import load_coco_json, load_sem_seg, register_coco_instances -from .coco_panoptic import register_coco_panoptic, register_coco_panoptic_separated -from .lvis import load_lvis_json, register_lvis_instances, get_lvis_instances_meta -from .pascal_voc import load_voc_instances, register_pascal_voc -from . 
import builtin as _builtin # ensure the builtin datasets are registered - - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/CVPR/regionclip-demo/detectron2/model_zoo/model_zoo.py b/spaces/CVPR/regionclip-demo/detectron2/model_zoo/model_zoo.py deleted file mode 100644 index 1ca234e0ec1f97be3e6cf761409a876bf2f05caf..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/model_zoo/model_zoo.py +++ /dev/null @@ -1,200 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import os -from typing import Optional -import pkg_resources -import torch - -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import CfgNode, LazyConfig, get_cfg, instantiate -from detectron2.modeling import build_model - - -class _ModelZooUrls(object): - """ - Mapping from names to officially released Detectron2 pre-trained models. - """ - - S3_PREFIX = "https://dl.fbaipublicfiles.com/detectron2/" - - # format: {config_path.yaml} -> model_id/model_final_{commit}.pkl - CONFIG_PATH_TO_URL_SUFFIX = { - # COCO Detection with Faster R-CNN - "COCO-Detection/faster_rcnn_R_50_C4_1x": "137257644/model_final_721ade.pkl", - "COCO-Detection/faster_rcnn_R_50_DC5_1x": "137847829/model_final_51d356.pkl", - "COCO-Detection/faster_rcnn_R_50_FPN_1x": "137257794/model_final_b275ba.pkl", - "COCO-Detection/faster_rcnn_R_50_C4_3x": "137849393/model_final_f97cb7.pkl", - "COCO-Detection/faster_rcnn_R_50_DC5_3x": "137849425/model_final_68d202.pkl", - "COCO-Detection/faster_rcnn_R_50_FPN_3x": "137849458/model_final_280758.pkl", - "COCO-Detection/faster_rcnn_R_101_C4_3x": "138204752/model_final_298dad.pkl", - "COCO-Detection/faster_rcnn_R_101_DC5_3x": "138204841/model_final_3e0943.pkl", - "COCO-Detection/faster_rcnn_R_101_FPN_3x": "137851257/model_final_f6e8b1.pkl", - "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x": "139173657/model_final_68b088.pkl", - # COCO Detection with RetinaNet - 
"COCO-Detection/retinanet_R_50_FPN_1x": "190397773/model_final_bfca0b.pkl", - "COCO-Detection/retinanet_R_50_FPN_3x": "190397829/model_final_5bd44e.pkl", - "COCO-Detection/retinanet_R_101_FPN_3x": "190397697/model_final_971ab9.pkl", - # COCO Detection with RPN and Fast R-CNN - "COCO-Detection/rpn_R_50_C4_1x": "137258005/model_final_450694.pkl", - "COCO-Detection/rpn_R_50_FPN_1x": "137258492/model_final_02ce48.pkl", - "COCO-Detection/fast_rcnn_R_50_FPN_1x": "137635226/model_final_e5f7ce.pkl", - # COCO Instance Segmentation Baselines with Mask R-CNN - "COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x": "137259246/model_final_9243eb.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_1x": "137260150/model_final_4f86c3.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x": "137260431/model_final_a54504.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_50_C4_3x": "137849525/model_final_4ce675.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_3x": "137849551/model_final_84107b.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x": "137849600/model_final_f10217.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_101_C4_3x": "138363239/model_final_a2914c.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_101_DC5_3x": "138363294/model_final_0464b7.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x": "138205316/model_final_a3ec72.pkl", - "COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x": "139653917/model_final_2d9806.pkl", # noqa - # COCO Person Keypoint Detection Baselines with Keypoint R-CNN - "COCO-Keypoints/keypoint_rcnn_R_50_FPN_1x": "137261548/model_final_04e291.pkl", - "COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x": "137849621/model_final_a6e10b.pkl", - "COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x": "138363331/model_final_997cc7.pkl", - "COCO-Keypoints/keypoint_rcnn_X_101_32x8d_FPN_3x": "139686956/model_final_5ad38f.pkl", - # COCO Panoptic Segmentation Baselines with Panoptic FPN - "COCO-PanopticSegmentation/panoptic_fpn_R_50_1x": 
"139514544/model_final_dbfeb4.pkl", - "COCO-PanopticSegmentation/panoptic_fpn_R_50_3x": "139514569/model_final_c10459.pkl", - "COCO-PanopticSegmentation/panoptic_fpn_R_101_3x": "139514519/model_final_cafdb1.pkl", - # LVIS Instance Segmentation Baselines with Mask R-CNN - "LVISv0.5-InstanceSegmentation/mask_rcnn_R_50_FPN_1x": "144219072/model_final_571f7c.pkl", # noqa - "LVISv0.5-InstanceSegmentation/mask_rcnn_R_101_FPN_1x": "144219035/model_final_824ab5.pkl", # noqa - "LVISv0.5-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_1x": "144219108/model_final_5e3439.pkl", # noqa - # Cityscapes & Pascal VOC Baselines - "Cityscapes/mask_rcnn_R_50_FPN": "142423278/model_final_af9cf5.pkl", - "PascalVOC-Detection/faster_rcnn_R_50_C4": "142202221/model_final_b1acc2.pkl", - # Other Settings - "Misc/mask_rcnn_R_50_FPN_1x_dconv_c3-c5": "138602867/model_final_65c703.pkl", - "Misc/mask_rcnn_R_50_FPN_3x_dconv_c3-c5": "144998336/model_final_821d0b.pkl", - "Misc/cascade_mask_rcnn_R_50_FPN_1x": "138602847/model_final_e9d89b.pkl", - "Misc/cascade_mask_rcnn_R_50_FPN_3x": "144998488/model_final_480dd8.pkl", - "Misc/mask_rcnn_R_50_FPN_3x_syncbn": "169527823/model_final_3b3c51.pkl", - "Misc/mask_rcnn_R_50_FPN_3x_gn": "138602888/model_final_dc5d9e.pkl", - "Misc/scratch_mask_rcnn_R_50_FPN_3x_gn": "138602908/model_final_01ca85.pkl", - "Misc/scratch_mask_rcnn_R_50_FPN_9x_gn": "183808979/model_final_da7b4c.pkl", - "Misc/scratch_mask_rcnn_R_50_FPN_9x_syncbn": "184226666/model_final_5ce33e.pkl", - "Misc/panoptic_fpn_R_101_dconv_cascade_gn_3x": "139797668/model_final_be35db.pkl", - "Misc/cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv": "18131413/model_0039999_e76410.pkl", # noqa - # D1 Comparisons - "Detectron1-Comparisons/faster_rcnn_R_50_FPN_noaug_1x": "137781054/model_final_7ab50c.pkl", # noqa - "Detectron1-Comparisons/mask_rcnn_R_50_FPN_noaug_1x": "137781281/model_final_62ca52.pkl", # noqa - "Detectron1-Comparisons/keypoint_rcnn_R_50_FPN_1x": "137781195/model_final_cce136.pkl", - } - - 
@staticmethod - def query(config_path: str) -> Optional[str]: - """ - Args: - config_path: relative config filename - """ - name = config_path.replace(".yaml", "").replace(".py", "") - if name in _ModelZooUrls.CONFIG_PATH_TO_URL_SUFFIX: - suffix = _ModelZooUrls.CONFIG_PATH_TO_URL_SUFFIX[name] - return _ModelZooUrls.S3_PREFIX + name + "/" + suffix - return None - - -def get_checkpoint_url(config_path): - """ - Returns the URL to the model trained using the given config - - Args: - config_path (str): config file name relative to detectron2's "configs/" - directory, e.g., "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml" - - Returns: - str: a URL to the model - """ - url = _ModelZooUrls.query(config_path) - if url is None: - raise RuntimeError("Pretrained model for {} is not available!".format(config_path)) - return url - - -def get_config_file(config_path): - """ - Returns path to a builtin config file. - - Args: - config_path (str): config file name relative to detectron2's "configs/" - directory, e.g., "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml" - - Returns: - str: the real path to the config file. - """ - cfg_file = pkg_resources.resource_filename( - "detectron2.model_zoo", os.path.join("configs", config_path) - ) - if not os.path.exists(cfg_file): - raise RuntimeError("{} not available in Model Zoo!".format(config_path)) - return cfg_file - - -def get_config(config_path, trained: bool = False): - """ - Returns a config object for a model in model zoo. - - Args: - config_path (str): config file name relative to detectron2's "configs/" - directory, e.g., "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml" - trained (bool): If True, will set ``MODEL.WEIGHTS`` to trained model zoo weights. - If False, the checkpoint specified in the config file's ``MODEL.WEIGHTS`` is used - instead; this will typically (though not always) initialize a subset of weights using - an ImageNet pre-trained model, while randomly initializing the other weights. 
- - Returns: - CfgNode or omegaconf.DictConfig: a config object - """ - cfg_file = get_config_file(config_path) - if cfg_file.endswith(".yaml"): - cfg = get_cfg() - cfg.merge_from_file(cfg_file) - if trained: - cfg.MODEL.WEIGHTS = get_checkpoint_url(config_path) - return cfg - elif cfg_file.endswith(".py"): - cfg = LazyConfig.load(cfg_file) - if trained: - url = get_checkpoint_url(config_path) - if "train" in cfg and "init_checkpoint" in cfg.train: - cfg.train.init_checkpoint = url - else: - raise NotImplementedError - return cfg - - -def get(config_path, trained: bool = False, device: Optional[str] = None): - """ - Get a model specified by relative path under Detectron2's official ``configs/`` directory. - - Args: - config_path (str): config file name relative to detectron2's "configs/" - directory, e.g., "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml" - trained (bool): see :func:`get_config`. - device (str or None): overwrite the device in config, if given. - - Returns: - nn.Module: a detectron2 model. Will be in training mode. 
- - Example: - :: - from detectron2 import model_zoo - model = model_zoo.get("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml", trained=True) - """ - cfg = get_config(config_path, trained) - if device is None and not torch.cuda.is_available(): - device = "cpu" - if device is not None and isinstance(cfg, CfgNode): - cfg.MODEL.DEVICE = device - - if isinstance(cfg, CfgNode): - model = build_model(cfg) - DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS) - else: - model = instantiate(cfg.model) - if device is not None: - model = model.to(device) - if "train" in cfg and "init_checkpoint" in cfg.train: - DetectionCheckpointer(model).load(cfg.train.init_checkpoint) - return model diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/version.py b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/version.py deleted file mode 100644 index b794fd409a5e3b3b65ad76a43d6a01a318877640..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '0.1.0' diff --git a/spaces/Clebersla/RVC_V2_Huggingface_Version/utils.py b/spaces/Clebersla/RVC_V2_Huggingface_Version/utils.py deleted file mode 100644 index 62be8d03a8e8b839f8747310ef0ec0e82fb8ff0a..0000000000000000000000000000000000000000 --- a/spaces/Clebersla/RVC_V2_Huggingface_Version/utils.py +++ /dev/null @@ -1,151 +0,0 @@ -import ffmpeg -import numpy as np - -# import praatio -# import praatio.praat_scripts -import os -import sys - -import random - -import csv - -platform_stft_mapping = { - "linux": "stftpitchshift", - "darwin": "stftpitchshift", - "win32": "stftpitchshift.exe", -} - -stft = platform_stft_mapping.get(sys.platform) -# praatEXE = join('.',os.path.abspath(os.getcwd()) + r"\Praat.exe") - - -def CSVutil(file, rw, type, *args): - if type == "formanting": - if rw == "r": - with open(file) as fileCSVread: - csv_reader = list(csv.reader(fileCSVread)) - return ( - 
(csv_reader[0][0], csv_reader[0][1], csv_reader[0][2]) - if csv_reader is not None - else (lambda: exec('raise ValueError("No data")'))() - ) - else: - if args: - doformnt = args[0] - else: - doformnt = False - qfr = args[1] if len(args) > 1 else 1.0 - tmb = args[2] if len(args) > 2 else 1.0 - with open(file, rw, newline="") as fileCSVwrite: - csv_writer = csv.writer(fileCSVwrite, delimiter=",") - csv_writer.writerow([doformnt, qfr, tmb]) - elif type == "stop": - stop = args[0] if args else False - with open(file, rw, newline="") as fileCSVwrite: - csv_writer = csv.writer(fileCSVwrite, delimiter=",") - csv_writer.writerow([stop]) - - -def load_audio(file, sr, DoFormant, Quefrency, Timbre): - converted = False - DoFormant, Quefrency, Timbre = CSVutil("csvdb/formanting.csv", "r", "formanting") - try: - # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26 - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. 
- file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - file_formanted = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - - # print(f"dofor={bool(DoFormant)} timbr={Timbre} quef={Quefrency}\n") - - if ( - lambda DoFormant: True - if DoFormant.lower() == "true" - else (False if DoFormant.lower() == "false" else DoFormant) - )(DoFormant): - numerator = round(random.uniform(1, 4), 4) - # os.system(f"stftpitchshift -i {file} -q {Quefrency} -t {Timbre} -o {file_formanted}") - # print('stftpitchshift -i "%s" -p 1.0 --rms -w 128 -v 8 -q %s -t %s -o "%s"' % (file, Quefrency, Timbre, file_formanted)) - - if not file.endswith(".wav"): - if not os.path.isfile(f"{file_formanted}.wav"): - converted = True - # print(f"\nfile = {file}\n") - # print(f"\nfile_formanted = {file_formanted}\n") - converting = ( - ffmpeg.input(file_formanted, threads=0) - .output(f"{file_formanted}.wav") - .run( - cmd=["ffmpeg", "-nostdin"], - capture_stdout=True, - capture_stderr=True, - ) - ) - else: - pass - - file_formanted = ( - f"{file_formanted}.wav" - if not file_formanted.endswith(".wav") - else file_formanted - ) - - print(f" · Formanting {file_formanted}...\n") - - os.system( - '%s -i "%s" -q "%s" -t "%s" -o "%sFORMANTED_%s.wav"' - % ( - stft, - file_formanted, - Quefrency, - Timbre, - file_formanted, - str(numerator), - ) - ) - - print(f" · Formanted {file_formanted}!\n") - - # filepraat = (os.path.abspath(os.getcwd()) + '\\' + file).replace('/','\\') - # file_formantedpraat = ('"' + os.path.abspath(os.getcwd()) + '/' + 'formanted'.join(file_formanted) + '"').replace('/','\\') - # print("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - - out, _ = ( - ffmpeg.input( - "%sFORMANTED_%s.wav" % (file_formanted, str(numerator)), threads=0 - ) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - - try: - 
os.remove("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - except Exception: - pass - print("couldn't remove formanted type of file") - - else: - out, _ = ( - ffmpeg.input(file, threads=0) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - if converted: - try: - os.remove(file_formanted) - except Exception: - pass - print("couldn't remove converted type of file") - converted = False - - return np.frombuffer(out, np.float32).flatten() diff --git a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Weuseing.py b/spaces/CofAI/chat.b4/g4f/Provider/Providers/Weuseing.py deleted file mode 100644 index ba79e8b9c2573418720495a20d4c1c8d5a6ca7e9..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Weuseing.py +++ /dev/null @@ -1,29 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://api.gptplus.one' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - 'Accept': '*/*', - 'Accept-Language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5,zh;q=0.4', - } - data = { - 'messages': messages, - 'model': model, - } - response = requests.post('https://api.gptplus.one/chat-process', json=data, stream=True) - print(response) - - for token in response.iter_content(chunk_size=None): - yield (token.decode('utf-8')) - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in 
_create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/CormacMc/projectsub6/app.py b/spaces/CormacMc/projectsub6/app.py deleted file mode 100644 index ad4f9057b54b250aee5fcfc52ff5f8346ca64d1a..0000000000000000000000000000000000000000 --- a/spaces/CormacMc/projectsub6/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -def greet(name): - return f"Hello {name}!!" - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/README.md b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/README.md deleted file mode 100644 index 3a3c7c63534bc9e663b2bb9ba0062aa314328f95..0000000000000000000000000000000000000000 --- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ControlNetMediaPipeFace -emoji: 👁 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/boundary_head/loss.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/boundary_head/loss.py deleted file mode 100644 index 00b659e1abde19746ef13aae30fa3bb2f298a57c..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/boundary_head/loss.py +++ /dev/null @@ -1,259 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-import torch -from torch.nn import functional as F - -from maskrcnn_benchmark.layers import smooth_l1_loss -from maskrcnn_benchmark.modeling.matcher import Matcher -from maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou -from maskrcnn_benchmark.modeling.utils import cat - -from maskrcnn_benchmark.modeling.balanced_positive_negative_sampler import ( - BalancedPositiveNegativeSampler -) -# import torch import torch.nn as nn -from maskrcnn_benchmark.structures.ke import kes_to_heat_map -import numpy as np -import os, time -import cv2 -DEBUG = 0 - -from scipy.ndimage.morphology import distance_transform_edt - - -def onehot_to_binary_edges(mask, radius): - """ - Converts a segmentation mask (K,H,W) to a binary edgemap (1,H,W) - """ - if radius < 0: - return mask - - # We need to pad the borders for boundary conditions - - mask = np.pad(mask, ((1, 1), (1, 1)), mode='constant', constant_values=0) - mask = distance_transform_edt(mask) - mask = mask[1:-1, 1:-1] - mask[mask > radius] = 0 - mask = (mask > 0).astype(np.uint8) - return mask - - -def project_masks_on_boxes(segmentation_masks, proposals, discretization_size): - """ - Given segmentation masks and the bounding boxes corresponding - to the location of the masks in the image, this function - crops and resizes the masks in the position defined by the - boxes. This prepares the masks for them to be fed to the - loss computation as the targets. 
- - Arguments: - segmentation_masks: an instance of SegmentationMask - proposals: an instance of BoxList - """ - masks = [] - M = discretization_size - device = proposals.bbox.device - proposals = proposals.convert("xyxy") - assert segmentation_masks.size == proposals.size, "{}, {}".format( - segmentation_masks, proposals - ) - - # FIXME: CPU computation bottleneck, this should be parallelized - proposals = proposals.bbox.to(torch.device("cpu")) - for segmentation_mask, proposal in zip(segmentation_masks, proposals): - # crop the masks, resize them to the desired resolution and - # then convert them to the tensor representation. - cropped_mask = segmentation_mask.crop(proposal) - scaled_mask = cropped_mask.resize((M, M)) - mask = scaled_mask.get_mask_tensor() - mask = mask.numpy().astype(np.uint8) - mask = onehot_to_binary_edges(mask, 2) - mask = torch.from_numpy(mask) - masks.append(mask) - if len(masks) == 0: - return torch.empty(0, dtype=torch.float32, device=device) - return torch.stack(masks, dim=0).to(device, dtype=torch.float32) - - -def project_kes_to_heatmap(kes, mty, proposals, discretization_size): - proposals = proposals.convert('xyxy') - out_x, out_y, valid_x, valid_y, out_mty, valid_mty = kes_to_heat_map(kes.kes_x, kes.kes_y, mty.mty, proposals.bbox, discretization_size) - return out_x, out_y, valid_x, valid_y, out_mty, valid_mty - -def _within_box(points_x, points_y, boxes): - """Validate which kes are contained inside a given box. 
- points: NxKx2 - boxes: Nx4 - output: NxK - """ - x_within = (points_x[..., :, 0] >= boxes[:, 0, None]) & (points_x[..., :, 0] <= boxes[:, 2, None]) - y_within = (points_y[..., :, 0] >= boxes[:, 1, None]) & (points_y[..., :, 0] <= boxes[:, 3, None]) - return x_within & y_within - -_TOTAL_SKIPPED = 0 - -def balance_ce_loss(pre_mk, target_mk): - pre_mk = torch.sigmoid(pre_mk) - - pos_inds = target_mk.eq(1) - pos_num = torch.sum(pos_inds).float() - neg_num = torch.sum(1 - pos_inds).float() - loss = -(target_mk * torch.log(pre_mk + 1e-4)) / pos_num - ((1 - target_mk) * torch.log(1 - pre_mk + 1e-4)) / neg_num - return loss.sum() - - -def edge_loss(input, target): - n, c, h, w = input.size() - - log_p = input.transpose(1, 2).transpose(2, 3).contiguous().view(1, -1) - target_t = target.transpose(1, 2).transpose(2, 3).contiguous().view(1, -1) - pos_index = (target_t == 1) - neg_index = (target_t == 0) - pos_index = pos_index.data.cpu().numpy().astype(bool) - neg_index = neg_index.data.cpu().numpy().astype(bool) - weight = torch.Tensor(log_p.size()).fill_(0) - weight = weight.numpy() - pos_num = pos_index.sum() - neg_num = neg_index.sum() - sum_num = pos_num + neg_num - weight[pos_index] = neg_num * 1.0 / sum_num - weight[neg_index] = pos_num * 1.0 / sum_num - weight = torch.from_numpy(weight) - weight = weight.cuda() - loss = F.binary_cross_entropy_with_logits(log_p, target_t, weight, size_average=True) - # del pos_index, neg_index - # del weight - return loss - -class BORCNNLossComputation(object): - def __init__(self, proposal_matcher, fg_bg_sampler, discretization_size, cfg): - """ - Arguments: - proposal_matcher (Matcher) - discretization_size (int) - """ - self.proposal_matcher = proposal_matcher - self.fg_bg_sampler = fg_bg_sampler - self.discretization_size = discretization_size - self.cfg = cfg.clone() - - def match_targets_to_proposals(self, proposal, target): - match_quality_matrix = boxlist_iou(target, proposal) - matched_idxs = 
self.proposal_matcher(match_quality_matrix) - target = target.copy_with_fields(["labels", "masks"]) - matched_targets = target[matched_idxs.clamp(min=0)] - matched_targets.add_field("matched_idxs", matched_idxs) - return matched_targets - - def prepare_targets(self, proposals, targets): - labels = [] - masks = [] - for proposals_per_image, targets_per_image in zip(proposals, targets): - matched_targets = self.match_targets_to_proposals( - proposals_per_image, targets_per_image - ) - matched_idxs = matched_targets.get_field("matched_idxs") - - labels_per_image = matched_targets.get_field("labels") - labels_per_image = labels_per_image.to(dtype=torch.int64) - - # this can probably be removed, but is left here for clarity - # and completeness - neg_inds = matched_idxs == Matcher.BELOW_LOW_THRESHOLD - labels_per_image[neg_inds] = 0 - - # mask scores are only computed on positive samples - positive_inds = torch.nonzero(labels_per_image > 0).squeeze(1) - - segmentation_masks = matched_targets.get_field("masks") - segmentation_masks = segmentation_masks[positive_inds] - - positive_proposals = proposals_per_image[positive_inds] - - masks_per_image = project_masks_on_boxes( - segmentation_masks, positive_proposals, self.discretization_size - ) - - labels.append(labels_per_image) - masks.append(masks_per_image) - - return labels, masks - - def subsample(self, proposals, targets): - """ - This method performs the positive/negative sampling, and return - the sampled proposals. - Note: this function keeps a state. 
- - Arguments: - proposals (list[BoxList]) - targets (list[BoxList]) - """ - - labels, kes, mty = self.prepare_targets(proposals, targets) - sampled_pos_inds, sampled_neg_inds = self.fg_bg_sampler(labels) - - proposals = list(proposals) - # add corresponding label and regression_targets information to the bounding boxes - for labels_per_image, kes_per_image, mty_per_image, proposals_per_image in zip( - labels, kes, mty, proposals - ): - proposals_per_image.add_field("labels", labels_per_image) - proposals_per_image.add_field("kes", kes_per_image) - proposals_per_image.add_field("mty", mty_per_image) - - # distributed sampled proposals, that were obtained on all feature maps - # concatenated via the fg_bg_sampler, into individual feature map levels - for img_idx, (pos_inds_img, neg_inds_img) in enumerate( - zip(sampled_pos_inds, sampled_neg_inds) - ): - # img_sampled_inds = torch.nonzero(pos_inds_img | neg_inds_img).squeeze(1) - img_sampled_inds = torch.nonzero(pos_inds_img).squeeze(1) - proposals_per_image = proposals[img_idx][img_sampled_inds] - proposals[img_idx] = proposals_per_image - - self._proposals = proposals - return proposals - - def __call__(self, proposals, ke_logits_x, ke_logits_y, targets): - """ - Arguments: - proposals (list[BoxList]) - mask_logits (Tensor) - targets (list[BoxList]) - - Return: - mask_loss (Tensor): scalar tensor containing the loss - """ - labels, mask_targets = self.prepare_targets(proposals, targets) - - labels = cat(labels, dim=0) - mask_targets = cat(mask_targets, dim=0) - positive_inds = torch.nonzero(labels > 0).squeeze(1) - - if mask_targets.numel() == 0: - return 0 - - sb, sh, sw = mask_targets.shape - mask_loss_x = edge_loss( ke_logits_x[positive_inds, 0].view([sb, 1, sh, sw]), mask_targets.view([sb, 1, sh, sw])) - mask_loss_y = edge_loss( ke_logits_y[positive_inds, 0].view([sb, 1, sh, sw]), mask_targets.view([sb, 1, sh, sw])) - - mask_loss = mask_loss_x + mask_loss_y - - return mask_loss , mask_loss_x, mask_loss_y - -def 
make_roi_boundary_loss_evaluator(cfg): - matcher = Matcher( - cfg.MODEL.ROI_HEADS.FG_IOU_THRESHOLD, - cfg.MODEL.ROI_HEADS.BG_IOU_THRESHOLD, - allow_low_quality_matches=False, - ) - - fg_bg_sampler = BalancedPositiveNegativeSampler( - cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE, cfg.MODEL.ROI_HEADS.POSITIVE_FRACTION - ) - - loss_evaluator = BORCNNLossComputation( - matcher, fg_bg_sampler, cfg.MODEL.ROI_BOUNDARY_HEAD.RESOLUTION, cfg - ) - - return loss_evaluator diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_resources.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_resources.py deleted file mode 100644 index b9a5344aef2962670f9b305a02cd0b11f2087d2f..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_resources.py +++ /dev/null @@ -1,18 +0,0 @@ -from __future__ import annotations - -from ..abc import AsyncResource -from ._tasks import CancelScope - - -async def aclose_forcefully(resource: AsyncResource) -> None: - """ - Close an asynchronous resource in a cancelled scope. - - Doing this closes the resource without waiting on anything. - - :param resource: the resource to close - - """ - with CancelScope() as scope: - scope.cancel() - await resource.aclose() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/E_B_L_C_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/E_B_L_C_.py deleted file mode 100644 index 9cc60ff82d23d9348dc956b8c4f44139226e4de6..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/E_B_L_C_.py +++ /dev/null @@ -1,717 +0,0 @@ -from fontTools.misc import sstruct -from . 
import DefaultTable -from fontTools.misc.textTools import bytesjoin, safeEval -from .BitmapGlyphMetrics import ( - BigGlyphMetrics, - bigGlyphMetricsFormat, - SmallGlyphMetrics, - smallGlyphMetricsFormat, -) -import struct -import itertools -from collections import deque -import logging - - -log = logging.getLogger(__name__) - -eblcHeaderFormat = """ - > # big endian - version: 16.16F - numSizes: I -""" -# The table format string is split to handle sbitLineMetrics simply. -bitmapSizeTableFormatPart1 = """ - > # big endian - indexSubTableArrayOffset: I - indexTablesSize: I - numberOfIndexSubTables: I - colorRef: I -""" -# The compound type for hori and vert. -sbitLineMetricsFormat = """ - > # big endian - ascender: b - descender: b - widthMax: B - caretSlopeNumerator: b - caretSlopeDenominator: b - caretOffset: b - minOriginSB: b - minAdvanceSB: b - maxBeforeBL: b - minAfterBL: b - pad1: b - pad2: b -""" -# hori and vert go between the two parts. -bitmapSizeTableFormatPart2 = """ - > # big endian - startGlyphIndex: H - endGlyphIndex: H - ppemX: B - ppemY: B - bitDepth: B - flags: b -""" - -indexSubTableArrayFormat = ">HHL" -indexSubTableArraySize = struct.calcsize(indexSubTableArrayFormat) - -indexSubHeaderFormat = ">HHL" -indexSubHeaderSize = struct.calcsize(indexSubHeaderFormat) - -codeOffsetPairFormat = ">HH" -codeOffsetPairSize = struct.calcsize(codeOffsetPairFormat) - - -class table_E_B_L_C_(DefaultTable.DefaultTable): - - dependencies = ["EBDT"] - - # This method can be overridden in subclasses to support new formats - # without changing the other implementation. Also can be used as a - # convenience method for coverting a font file to an alternative format. - def getIndexFormatClass(self, indexFormat): - return eblc_sub_table_classes[indexFormat] - - def decompile(self, data, ttFont): - - # Save the original data because offsets are from the start of the table. 
- origData = data - i = 0 - - dummy = sstruct.unpack(eblcHeaderFormat, data[:8], self) - i += 8 - - self.strikes = [] - for curStrikeIndex in range(self.numSizes): - curStrike = Strike() - self.strikes.append(curStrike) - curTable = curStrike.bitmapSizeTable - dummy = sstruct.unpack2( - bitmapSizeTableFormatPart1, data[i : i + 16], curTable - ) - i += 16 - for metric in ("hori", "vert"): - metricObj = SbitLineMetrics() - vars(curTable)[metric] = metricObj - dummy = sstruct.unpack2( - sbitLineMetricsFormat, data[i : i + 12], metricObj - ) - i += 12 - dummy = sstruct.unpack( - bitmapSizeTableFormatPart2, data[i : i + 8], curTable - ) - i += 8 - - for curStrike in self.strikes: - curTable = curStrike.bitmapSizeTable - for subtableIndex in range(curTable.numberOfIndexSubTables): - i = ( - curTable.indexSubTableArrayOffset - + subtableIndex * indexSubTableArraySize - ) - - tup = struct.unpack( - indexSubTableArrayFormat, data[i : i + indexSubTableArraySize] - ) - (firstGlyphIndex, lastGlyphIndex, additionalOffsetToIndexSubtable) = tup - i = curTable.indexSubTableArrayOffset + additionalOffsetToIndexSubtable - - tup = struct.unpack( - indexSubHeaderFormat, data[i : i + indexSubHeaderSize] - ) - (indexFormat, imageFormat, imageDataOffset) = tup - - indexFormatClass = self.getIndexFormatClass(indexFormat) - indexSubTable = indexFormatClass(data[i + indexSubHeaderSize :], ttFont) - indexSubTable.firstGlyphIndex = firstGlyphIndex - indexSubTable.lastGlyphIndex = lastGlyphIndex - indexSubTable.additionalOffsetToIndexSubtable = ( - additionalOffsetToIndexSubtable - ) - indexSubTable.indexFormat = indexFormat - indexSubTable.imageFormat = imageFormat - indexSubTable.imageDataOffset = imageDataOffset - indexSubTable.decompile() # https://github.com/fonttools/fonttools/issues/317 - curStrike.indexSubTables.append(indexSubTable) - - def compile(self, ttFont): - - dataList = [] - self.numSizes = len(self.strikes) - dataList.append(sstruct.pack(eblcHeaderFormat, self)) - - # Data 
size of the header + bitmapSizeTable needs to be calculated - # in order to form offsets. This value will hold the size of the data - # in dataList after all the data is consolidated in dataList. - dataSize = len(dataList[0]) - - # The table will be structured in the following order: - # (0) header - # (1) Each bitmapSizeTable [1 ... self.numSizes] - # (2) Alternate between indexSubTableArray and indexSubTable - # for each bitmapSizeTable present. - # - # The issue is maintaining the proper offsets when table information - # gets moved around. All offsets and size information must be recalculated - # when building the table to allow editing within ttLib and also allow easy - # import/export to and from XML. All of this offset information is lost - # when exporting to XML so everything must be calculated fresh so importing - # from XML will work cleanly. Only byte offset and size information is - # calculated fresh. Count information like numberOfIndexSubTables is - # checked through assertions. If the information in this table was not - # touched or was changed properly then these types of values should match. - # - # The table will be rebuilt the following way: - # (0) Precompute the size of all the bitmapSizeTables. This is needed to - # compute the offsets properly. - # (1) For each bitmapSizeTable compute the indexSubTable and - # indexSubTableArray pair. The indexSubTable must be computed first - # so that the offset information in indexSubTableArray can be - # calculated. Update the data size after each pairing. - # (2) Build each bitmapSizeTable. - # (3) Consolidate all the data into the main dataList in the correct order. 
- - for _ in self.strikes: - dataSize += sstruct.calcsize(bitmapSizeTableFormatPart1) - dataSize += len(("hori", "vert")) * sstruct.calcsize(sbitLineMetricsFormat) - dataSize += sstruct.calcsize(bitmapSizeTableFormatPart2) - - indexSubTablePairDataList = [] - for curStrike in self.strikes: - curTable = curStrike.bitmapSizeTable - curTable.numberOfIndexSubTables = len(curStrike.indexSubTables) - curTable.indexSubTableArrayOffset = dataSize - - # Precompute the size of the indexSubTableArray. This information - # is important for correctly calculating the new value for - # additionalOffsetToIndexSubtable. - sizeOfSubTableArray = ( - curTable.numberOfIndexSubTables * indexSubTableArraySize - ) - lowerBound = dataSize - dataSize += sizeOfSubTableArray - upperBound = dataSize - - indexSubTableDataList = [] - for indexSubTable in curStrike.indexSubTables: - indexSubTable.additionalOffsetToIndexSubtable = ( - dataSize - curTable.indexSubTableArrayOffset - ) - glyphIds = list(map(ttFont.getGlyphID, indexSubTable.names)) - indexSubTable.firstGlyphIndex = min(glyphIds) - indexSubTable.lastGlyphIndex = max(glyphIds) - data = indexSubTable.compile(ttFont) - indexSubTableDataList.append(data) - dataSize += len(data) - curTable.startGlyphIndex = min( - ist.firstGlyphIndex for ist in curStrike.indexSubTables - ) - curTable.endGlyphIndex = max( - ist.lastGlyphIndex for ist in curStrike.indexSubTables - ) - - for i in curStrike.indexSubTables: - data = struct.pack( - indexSubHeaderFormat, - i.firstGlyphIndex, - i.lastGlyphIndex, - i.additionalOffsetToIndexSubtable, - ) - indexSubTablePairDataList.append(data) - indexSubTablePairDataList.extend(indexSubTableDataList) - curTable.indexTablesSize = dataSize - curTable.indexSubTableArrayOffset - - for curStrike in self.strikes: - curTable = curStrike.bitmapSizeTable - data = sstruct.pack(bitmapSizeTableFormatPart1, curTable) - dataList.append(data) - for metric in ("hori", "vert"): - metricObj = vars(curTable)[metric] - data = 
sstruct.pack(sbitLineMetricsFormat, metricObj) - dataList.append(data) - data = sstruct.pack(bitmapSizeTableFormatPart2, curTable) - dataList.append(data) - dataList.extend(indexSubTablePairDataList) - - return bytesjoin(dataList) - - def toXML(self, writer, ttFont): - writer.simpletag("header", [("version", self.version)]) - writer.newline() - for curIndex, curStrike in enumerate(self.strikes): - curStrike.toXML(curIndex, writer, ttFont) - - def fromXML(self, name, attrs, content, ttFont): - if name == "header": - self.version = safeEval(attrs["version"]) - elif name == "strike": - if not hasattr(self, "strikes"): - self.strikes = [] - strikeIndex = safeEval(attrs["index"]) - curStrike = Strike() - curStrike.fromXML(name, attrs, content, ttFont, self) - - # Grow the strike array to the appropriate size. The XML format - # allows for the strike index value to be out of order. - if strikeIndex >= len(self.strikes): - self.strikes += [None] * (strikeIndex + 1 - len(self.strikes)) - assert self.strikes[strikeIndex] is None, "Duplicate strike EBLC indices." - self.strikes[strikeIndex] = curStrike - - -class Strike(object): - def __init__(self): - self.bitmapSizeTable = BitmapSizeTable() - self.indexSubTables = [] - - def toXML(self, strikeIndex, writer, ttFont): - writer.begintag("strike", [("index", strikeIndex)]) - writer.newline() - self.bitmapSizeTable.toXML(writer, ttFont) - writer.comment( - "GlyphIds are written but not read. The firstGlyphIndex and\nlastGlyphIndex values will be recalculated by the compiler." 
- ) - writer.newline() - for indexSubTable in self.indexSubTables: - indexSubTable.toXML(writer, ttFont) - writer.endtag("strike") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont, locator): - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "bitmapSizeTable": - self.bitmapSizeTable.fromXML(name, attrs, content, ttFont) - elif name.startswith(_indexSubTableSubclassPrefix): - indexFormat = safeEval(name[len(_indexSubTableSubclassPrefix) :]) - indexFormatClass = locator.getIndexFormatClass(indexFormat) - indexSubTable = indexFormatClass(None, None) - indexSubTable.indexFormat = indexFormat - indexSubTable.fromXML(name, attrs, content, ttFont) - self.indexSubTables.append(indexSubTable) - - -class BitmapSizeTable(object): - - # Returns all the simple metric names that bitmap size table - # cares about in terms of XML creation. - def _getXMLMetricNames(self): - dataNames = sstruct.getformat(bitmapSizeTableFormatPart1)[1] - dataNames = dataNames + sstruct.getformat(bitmapSizeTableFormatPart2)[1] - # Skip the first 3 data names because they are byte offsets and counts. - return dataNames[3:] - - def toXML(self, writer, ttFont): - writer.begintag("bitmapSizeTable") - writer.newline() - for metric in ("hori", "vert"): - getattr(self, metric).toXML(metric, writer, ttFont) - for metricName in self._getXMLMetricNames(): - writer.simpletag(metricName, value=getattr(self, metricName)) - writer.newline() - writer.endtag("bitmapSizeTable") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - # Create a lookup for all the simple names that make sense to - # bitmap size table. Only read the information from these names. 
- dataNames = set(self._getXMLMetricNames()) - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "sbitLineMetrics": - direction = attrs["direction"] - assert direction in ( - "hori", - "vert", - ), "SbitLineMetrics direction specified invalid." - metricObj = SbitLineMetrics() - metricObj.fromXML(name, attrs, content, ttFont) - vars(self)[direction] = metricObj - elif name in dataNames: - vars(self)[name] = safeEval(attrs["value"]) - else: - log.warning("unknown name '%s' being ignored in BitmapSizeTable.", name) - - -class SbitLineMetrics(object): - def toXML(self, name, writer, ttFont): - writer.begintag("sbitLineMetrics", [("direction", name)]) - writer.newline() - for metricName in sstruct.getformat(sbitLineMetricsFormat)[1]: - writer.simpletag(metricName, value=getattr(self, metricName)) - writer.newline() - writer.endtag("sbitLineMetrics") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - metricNames = set(sstruct.getformat(sbitLineMetricsFormat)[1]) - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name in metricNames: - vars(self)[name] = safeEval(attrs["value"]) - - -# Important information about the naming scheme. Used for identifying subtables. -_indexSubTableSubclassPrefix = "eblc_index_sub_table_" - - -class EblcIndexSubTable(object): - def __init__(self, data, ttFont): - self.data = data - self.ttFont = ttFont - # TODO Currently non-lazy decompiling doesn't work for this class... - # if not ttFont.lazy: - # self.decompile() - # del self.data, self.ttFont - - def __getattr__(self, attr): - # Allow lazy decompile. 
- if attr[:2] == "__": - raise AttributeError(attr) - if attr == "data": - raise AttributeError(attr) - self.decompile() - return getattr(self, attr) - - def ensureDecompiled(self, recurse=False): - if hasattr(self, "data"): - self.decompile() - - # This method just takes care of the indexSubHeader. Implementing subclasses - # should call it to compile the indexSubHeader and then continue compiling - # the remainder of their unique format. - def compile(self, ttFont): - return struct.pack( - indexSubHeaderFormat, - self.indexFormat, - self.imageFormat, - self.imageDataOffset, - ) - - # Creates the XML for bitmap glyphs. Each index sub table basically makes - # the same XML except for specific metric information that is written - # out via a method call that a subclass implements optionally. - def toXML(self, writer, ttFont): - writer.begintag( - self.__class__.__name__, - [ - ("imageFormat", self.imageFormat), - ("firstGlyphIndex", self.firstGlyphIndex), - ("lastGlyphIndex", self.lastGlyphIndex), - ], - ) - writer.newline() - self.writeMetrics(writer, ttFont) - # Write out the names as thats all thats needed to rebuild etc. - # For font debugging of consecutive formats the ids are also written. - # The ids are not read when moving from the XML format. - glyphIds = map(ttFont.getGlyphID, self.names) - for glyphName, glyphId in zip(self.names, glyphIds): - writer.simpletag("glyphLoc", name=glyphName, id=glyphId) - writer.newline() - writer.endtag(self.__class__.__name__) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - # Read all the attributes. Even though the glyph indices are - # recalculated, they are still read in case there needs to - # be an immediate export of the data. 
- self.imageFormat = safeEval(attrs["imageFormat"]) - self.firstGlyphIndex = safeEval(attrs["firstGlyphIndex"]) - self.lastGlyphIndex = safeEval(attrs["lastGlyphIndex"]) - - self.readMetrics(name, attrs, content, ttFont) - - self.names = [] - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "glyphLoc": - self.names.append(attrs["name"]) - - # A helper method that writes the metrics for the index sub table. It also - # is responsible for writing the image size for fixed size data since fixed - # size is not recalculated on compile. Default behavior is to do nothing. - def writeMetrics(self, writer, ttFont): - pass - - # A helper method that is the inverse of writeMetrics. - def readMetrics(self, name, attrs, content, ttFont): - pass - - # This method is for fixed glyph data sizes. There are formats where - # the glyph data is fixed but are actually composite glyphs. To handle - # this the font spec in indexSubTable makes the data the size of the - # fixed size by padding the component arrays. This function abstracts - # out this padding process. Input is data unpadded. Output is data - # padded only in fixed formats. Default behavior is to return the data. - def padBitmapData(self, data): - return data - - # Remove any of the glyph locations and names that are flagged as skipped. - # This only occurs in formats {1,3}. - def removeSkipGlyphs(self): - # Determines if a name, location pair is a valid data location. - # Skip glyphs are marked when the size is equal to zero. - def isValidLocation(args): - (name, (startByte, endByte)) = args - return startByte < endByte - - # Remove all skip glyphs. - dataPairs = list(filter(isValidLocation, zip(self.names, self.locations))) - self.names, self.locations = list(map(list, zip(*dataPairs))) - - -# A closure for creating a custom mixin. This is done because formats 1 and 3 -# are very similar. The only difference between them is the size per offset -# value. 
Code put in here should handle both cases generally. -def _createOffsetArrayIndexSubTableMixin(formatStringForDataType): - - # Prep the data size for the offset array data format. - dataFormat = ">" + formatStringForDataType - offsetDataSize = struct.calcsize(dataFormat) - - class OffsetArrayIndexSubTableMixin(object): - def decompile(self): - - numGlyphs = self.lastGlyphIndex - self.firstGlyphIndex + 1 - indexingOffsets = [ - glyphIndex * offsetDataSize for glyphIndex in range(numGlyphs + 2) - ] - indexingLocations = zip(indexingOffsets, indexingOffsets[1:]) - offsetArray = [ - struct.unpack(dataFormat, self.data[slice(*loc)])[0] - for loc in indexingLocations - ] - - glyphIds = list(range(self.firstGlyphIndex, self.lastGlyphIndex + 1)) - modifiedOffsets = [offset + self.imageDataOffset for offset in offsetArray] - self.locations = list(zip(modifiedOffsets, modifiedOffsets[1:])) - - self.names = list(map(self.ttFont.getGlyphName, glyphIds)) - self.removeSkipGlyphs() - del self.data, self.ttFont - - def compile(self, ttFont): - # First make sure that all the data lines up properly. Formats 1 and 3 - # must have all its data lined up consecutively. If not this will fail. - for curLoc, nxtLoc in zip(self.locations, self.locations[1:]): - assert ( - curLoc[1] == nxtLoc[0] - ), "Data must be consecutive in indexSubTable offset formats" - - glyphIds = list(map(ttFont.getGlyphID, self.names)) - # Make sure that all ids are sorted strictly increasing. - assert all(glyphIds[i] < glyphIds[i + 1] for i in range(len(glyphIds) - 1)) - - # Run a simple algorithm to add skip glyphs to the data locations at - # the places where an id is not present. 
- idQueue = deque(glyphIds) - locQueue = deque(self.locations) - allGlyphIds = list(range(self.firstGlyphIndex, self.lastGlyphIndex + 1)) - allLocations = [] - for curId in allGlyphIds: - if curId != idQueue[0]: - allLocations.append((locQueue[0][0], locQueue[0][0])) - else: - idQueue.popleft() - allLocations.append(locQueue.popleft()) - - # Now that all the locations are collected, pack them appropriately into - # offsets. This is the form where offset[i] is the location and - # offset[i+1]-offset[i] is the size of the data location. - offsets = list(allLocations[0]) + [loc[1] for loc in allLocations[1:]] - # Image data offset must be less than or equal to the minimum of locations. - # This offset may change the value for round tripping but is safer and - # allows imageDataOffset to not be required to be in the XML version. - self.imageDataOffset = min(offsets) - offsetArray = [offset - self.imageDataOffset for offset in offsets] - - dataList = [EblcIndexSubTable.compile(self, ttFont)] - dataList += [ - struct.pack(dataFormat, offsetValue) for offsetValue in offsetArray - ] - # Take care of any padding issues. Only occurs in format 3. - if offsetDataSize * len(offsetArray) % 4 != 0: - dataList.append(struct.pack(dataFormat, 0)) - return bytesjoin(dataList) - - return OffsetArrayIndexSubTableMixin - - -# A Mixin for functionality shared between the different kinds -# of fixed sized data handling. Both kinds have big metrics so -# that kind of special processing is also handled in this mixin. 
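The compile step above fills the gaps in a sparse glyph-id range with zero-length "skip" entries before packing offsets, so that `offset[i+1] - offset[i]` is zero exactly for the missing glyphs. A standalone sketch of that deque-based fill (the helper name and the emptiness guard are mine, not fontTools API):

```python
from collections import deque

def fill_skip_locations(first_id, last_id, ids, locations):
    # Walk every glyph id in the range; ids missing from `ids` get a
    # zero-length (start, start) entry anchored at the next real glyph's
    # start, marking them as skip glyphs.
    id_queue = deque(ids)
    loc_queue = deque(locations)
    all_locations = []
    for cur_id in range(first_id, last_id + 1):
        if id_queue and cur_id == id_queue[0]:
            id_queue.popleft()
            all_locations.append(loc_queue.popleft())
        else:
            all_locations.append((loc_queue[0][0], loc_queue[0][0]))
    return all_locations

# Glyphs 10 and 12 have data; glyph 11 becomes a zero-length skip entry.
print(fill_skip_locations(10, 12, [10, 12], [(0, 8), (8, 20)]))
# -> [(0, 8), (8, 8), (8, 20)]
```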
-class FixedSizeIndexSubTableMixin(object): - def writeMetrics(self, writer, ttFont): - writer.simpletag("imageSize", value=self.imageSize) - writer.newline() - self.metrics.toXML(writer, ttFont) - - def readMetrics(self, name, attrs, content, ttFont): - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "imageSize": - self.imageSize = safeEval(attrs["value"]) - elif name == BigGlyphMetrics.__name__: - self.metrics = BigGlyphMetrics() - self.metrics.fromXML(name, attrs, content, ttFont) - elif name == SmallGlyphMetrics.__name__: - log.warning( - "SmallGlyphMetrics being ignored in format %d.", self.indexFormat - ) - - def padBitmapData(self, data): - # Make sure that the data isn't bigger than the fixed size. - assert len(data) <= self.imageSize, ( - "Data in indexSubTable format %d must be less than the fixed size." - % self.indexFormat - ) - # Pad the data so that it matches the fixed size. - pad = (self.imageSize - len(data)) * b"\0" - return data + pad - - -class eblc_index_sub_table_1( - _createOffsetArrayIndexSubTableMixin("L"), EblcIndexSubTable -): - pass - - -class eblc_index_sub_table_2(FixedSizeIndexSubTableMixin, EblcIndexSubTable): - def decompile(self): - (self.imageSize,) = struct.unpack(">L", self.data[:4]) - self.metrics = BigGlyphMetrics() - sstruct.unpack2(bigGlyphMetricsFormat, self.data[4:], self.metrics) - glyphIds = list(range(self.firstGlyphIndex, self.lastGlyphIndex + 1)) - offsets = [ - self.imageSize * i + self.imageDataOffset for i in range(len(glyphIds) + 1) - ] - self.locations = list(zip(offsets, offsets[1:])) - self.names = list(map(self.ttFont.getGlyphName, glyphIds)) - del self.data, self.ttFont - - def compile(self, ttFont): - glyphIds = list(map(ttFont.getGlyphID, self.names)) - # Make sure all the ids are consecutive. This is required by Format 2. 
- assert glyphIds == list( - range(self.firstGlyphIndex, self.lastGlyphIndex + 1) - ), "Format 2 ids must be consecutive." - self.imageDataOffset = min(next(iter(zip(*self.locations)))) - - dataList = [EblcIndexSubTable.compile(self, ttFont)] - dataList.append(struct.pack(">L", self.imageSize)) - dataList.append(sstruct.pack(bigGlyphMetricsFormat, self.metrics)) - return bytesjoin(dataList) - - -class eblc_index_sub_table_3( - _createOffsetArrayIndexSubTableMixin("H"), EblcIndexSubTable -): - pass - - -class eblc_index_sub_table_4(EblcIndexSubTable): - def decompile(self): - - (numGlyphs,) = struct.unpack(">L", self.data[:4]) - data = self.data[4:] - indexingOffsets = [ - glyphIndex * codeOffsetPairSize for glyphIndex in range(numGlyphs + 2) - ] - indexingLocations = zip(indexingOffsets, indexingOffsets[1:]) - glyphArray = [ - struct.unpack(codeOffsetPairFormat, data[slice(*loc)]) - for loc in indexingLocations - ] - glyphIds, offsets = list(map(list, zip(*glyphArray))) - # There are one too many glyph ids. Get rid of the last one. - glyphIds.pop() - - offsets = [offset + self.imageDataOffset for offset in offsets] - self.locations = list(zip(offsets, offsets[1:])) - self.names = list(map(self.ttFont.getGlyphName, glyphIds)) - del self.data, self.ttFont - - def compile(self, ttFont): - # First make sure that all the data lines up properly. Format 4 - # must have all its data lined up consecutively. If not this will fail. - for curLoc, nxtLoc in zip(self.locations, self.locations[1:]): - assert ( - curLoc[1] == nxtLoc[0] - ), "Data must be consecutive in indexSubTable format 4" - - offsets = list(self.locations[0]) + [loc[1] for loc in self.locations[1:]] - # Image data offset must be less than or equal to the minimum of locations. - # Resetting this offset may change the value for round tripping but is safer - # and allows imageDataOffset to not be required to be in the XML version. 
- self.imageDataOffset = min(offsets) - offsets = [offset - self.imageDataOffset for offset in offsets] - glyphIds = list(map(ttFont.getGlyphID, self.names)) - # Create an iterator over the ids plus a padding value. - idsPlusPad = list(itertools.chain(glyphIds, [0])) - - dataList = [EblcIndexSubTable.compile(self, ttFont)] - dataList.append(struct.pack(">L", len(glyphIds))) - tmp = [ - struct.pack(codeOffsetPairFormat, *cop) for cop in zip(idsPlusPad, offsets) - ] - dataList += tmp - data = bytesjoin(dataList) - return data - - -class eblc_index_sub_table_5(FixedSizeIndexSubTableMixin, EblcIndexSubTable): - def decompile(self): - self.origDataLen = 0 - (self.imageSize,) = struct.unpack(">L", self.data[:4]) - data = self.data[4:] - self.metrics, data = sstruct.unpack2( - bigGlyphMetricsFormat, data, BigGlyphMetrics() - ) - (numGlyphs,) = struct.unpack(">L", data[:4]) - data = data[4:] - glyphIds = [ - struct.unpack(">H", data[2 * i : 2 * (i + 1)])[0] for i in range(numGlyphs) - ] - - offsets = [ - self.imageSize * i + self.imageDataOffset for i in range(len(glyphIds) + 1) - ] - self.locations = list(zip(offsets, offsets[1:])) - self.names = list(map(self.ttFont.getGlyphName, glyphIds)) - del self.data, self.ttFont - - def compile(self, ttFont): - self.imageDataOffset = min(next(iter(zip(*self.locations)))) - dataList = [EblcIndexSubTable.compile(self, ttFont)] - dataList.append(struct.pack(">L", self.imageSize)) - dataList.append(sstruct.pack(bigGlyphMetricsFormat, self.metrics)) - glyphIds = list(map(ttFont.getGlyphID, self.names)) - dataList.append(struct.pack(">L", len(glyphIds))) - dataList += [struct.pack(">H", curId) for curId in glyphIds] - if len(glyphIds) % 2 == 1: - dataList.append(struct.pack(">H", 0)) - return bytesjoin(dataList) - - -# Dictionary of indexFormat to the class representing that format. 
-eblc_sub_table_classes = {
-    1: eblc_index_sub_table_1,
-    2: eblc_index_sub_table_2,
-    3: eblc_index_sub_table_3,
-    4: eblc_index_sub_table_4,
-    5: eblc_index_sub_table_5,
-}
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_S_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_S_.py
deleted file mode 100644
index 667eb0e53473c1566d4b45e5621d8897ebd7b9fe..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_S_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .T_S_I_V_ import table_T_S_I_V_
-
-
-class table_T_S_I_S_(table_T_S_I_V_):
-    pass
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_telemetry.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_telemetry.py
deleted file mode 100644
index 5de988e2795188324f69232d1beb68191591715d..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_telemetry.py
+++ /dev/null
@@ -1,118 +0,0 @@
-from queue import Queue
-from threading import Lock, Thread
-from typing import Dict, Optional, Union
-from urllib.parse import quote
-
-from .. import constants, logging
-from . import build_hf_headers, get_session, hf_raise_for_status
-
-
-logger = logging.get_logger(__name__)
-
-# Telemetry is sent by a separate thread to avoid blocking the main thread.
-# A daemon thread is started once and consumes tasks from the _TELEMETRY_QUEUE.
-# If the thread stops for some reason (which shouldn't happen), we start a new one.
-_TELEMETRY_THREAD: Optional[Thread] = None -_TELEMETRY_THREAD_LOCK = Lock() # Lock to avoid starting multiple threads in parallel -_TELEMETRY_QUEUE: Queue = Queue() - - -def send_telemetry( - topic: str, - *, - library_name: Optional[str] = None, - library_version: Optional[str] = None, - user_agent: Union[Dict, str, None] = None, -) -> None: - """ - Sends telemetry that helps tracking usage of different HF libraries. - - This usage data helps us debug issues and prioritize new features. However, we understand that not everyone wants - to share additional information, and we respect your privacy. You can disable telemetry collection by setting the - `HF_HUB_DISABLE_TELEMETRY=1` as environment variable. Telemetry is also disabled in offline mode (i.e. when setting - `HF_HUB_OFFLINE=1`). - - Telemetry collection is run in a separate thread to minimize impact for the user. - - Args: - topic (`str`): - Name of the topic that is monitored. The topic is directly used to build the URL. If you want to monitor - subtopics, just use "/" separation. Examples: "gradio", "transformers/examples",... - library_name (`str`, *optional*): - The name of the library that is making the HTTP request. Will be added to the user-agent header. - library_version (`str`, *optional*): - The version of the library that is making the HTTP request. Will be added to the user-agent header. - user_agent (`str`, `dict`, *optional*): - The user agent info in the form of a dictionary or a single string. It will be completed with information about the installed packages. - - Example: - ```py - >>> from huggingface_hub.utils import send_telemetry - - # Send telemetry without library information - >>> send_telemetry("ping") - - # Send telemetry to subtopic with library information - >>> send_telemetry("gradio/local_link", library_name="gradio", library_version="3.22.1") - - # Send telemetry with additional data - >>> send_telemetry( - ... topic="examples", - ... library_name="transformers", - ... 
library_version="4.26.0",
-    ...     user_agent={"pipeline": "text_classification", "framework": "flax"},
-    ... )
-    ```
-    """
-    if constants.HF_HUB_OFFLINE or constants.HF_HUB_DISABLE_TELEMETRY:
-        return
-
-    _start_telemetry_thread()  # starts the thread only if it doesn't exist yet
-    _TELEMETRY_QUEUE.put(
-        {"topic": topic, "library_name": library_name, "library_version": library_version, "user_agent": user_agent}
-    )
-
-
-def _start_telemetry_thread():
-    """Start a daemon thread to consume tasks from the telemetry queue.
-
-    If the thread is interrupted, start a new one.
-    """
-    with _TELEMETRY_THREAD_LOCK:  # avoid starting multiple threads if called concurrently
-        global _TELEMETRY_THREAD
-        if _TELEMETRY_THREAD is None or not _TELEMETRY_THREAD.is_alive():
-            _TELEMETRY_THREAD = Thread(target=_telemetry_worker, daemon=True)
-            _TELEMETRY_THREAD.start()
-
-
-def _telemetry_worker():
-    """Wait for a task and consume it."""
-    while True:
-        kwargs = _TELEMETRY_QUEUE.get()
-        _send_telemetry_in_thread(**kwargs)
-        _TELEMETRY_QUEUE.task_done()
-
-
-def _send_telemetry_in_thread(
-    topic: str,
-    *,
-    library_name: Optional[str] = None,
-    library_version: Optional[str] = None,
-    user_agent: Union[Dict, str, None] = None,
-) -> None:
-    """Contains the actual code that sends the telemetry data to the Hub."""
-    path = "/".join(quote(part) for part in topic.split("/") if len(part) > 0)
-    try:
-        r = get_session().head(
-            f"{constants.ENDPOINT}/api/telemetry/{path}",
-            headers=build_hf_headers(
-                token=False,  # no need to send a token for telemetry
-                library_name=library_name,
-                library_version=library_version,
-                user_agent=user_agent,
-            ),
-        )
-        hf_raise_for_status(r)
-    except Exception as e:
-        # We don't want to error in case of connection errors of any kind.
- logger.debug(f"Error while sending telemetry: {e}") diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/conv_transformer.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/conv_transformer.py deleted file mode 100644 index 6fcbfe4acfc2a30e12eafd2ed74a6e7b5d25641d..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/conv_transformer.py +++ /dev/null @@ -1,128 +0,0 @@ -import torch -import torch.nn.functional as F - -from torch import nn, einsum -from einops import rearrange - - -class PreNorm(nn.Module): - def __init__(self, dim, fn): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.fn = fn - - def forward(self, x, **kwargs): - return self.fn(self.norm(x), **kwargs) - - -class GELU(nn.Module): - def forward(self, input): - return F.gelu(input) - - -class Attend(nn.Module): - - def __init__(self, dim=None): - super().__init__() - self.dim = dim - - def forward(self, input): - return F.softmax(input, dim=self.dim, dtype=input.dtype) - - -class FeedForward(nn.Module): - def __init__(self, dim, hidden_dim, dropout=0.): - super().__init__() - self.net = nn.Sequential( - nn.Linear(dim, hidden_dim), - GELU(), - nn.Dropout(dropout), - nn.Linear(hidden_dim, dim), - nn.Dropout(dropout) - ) - - def forward(self, x): - return self.net(x) - - -class Attention(nn.Module): - def __init__(self, dim, heads=8, dim_head=64, dropout=0.): - super().__init__() - inner_dim = dim_head * heads - project_out = not (heads == 1 and dim_head == dim) - - self.heads = heads - self.scale = dim_head ** -0.5 - - self.attend = Attend(dim=-1) - self.to_qkv = nn.Linear(dim, inner_dim * 3, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, dim), - nn.Dropout(dropout) - ) if project_out else nn.Identity() - - def forward(self, x): - b, n, _, h = *x.shape, self.heads - qkv = self.to_qkv(x).chunk(3, dim=-1) - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b 
h n d', h=h), qkv) - dots = einsum('b h i d, b h j d -> b h i j', q, k) * self.scale - attn = self.attend(dots) - out = einsum('b h i j, b h j d -> b h i d', attn, v) - out = rearrange(out, 'b h n d -> b n (h d)') - return self.to_out(out) - - -class Conv(nn.Module): - def __init__(self, dim, dropout=0.): - super().__init__() - self.dim = dim - self.net = nn.Sequential( - nn.Conv1d(dim, dim, kernel_size=3, stride=1, padding=0), - nn.Dropout(dropout) - ) - - def forward(self, x): - x = x.transpose(1, 2) - x = torch.cat([x[..., -1:], x, x[..., :1]], dim=-1) - x = self.net(x) - return x.transpose(1, 2) - - -class ConvTransformer(nn.Module): - def __init__(self, dim, depth, heads, dim_head, mlp_dim, dropout=0.): - super().__init__() - self.layers = nn.ModuleList([]) - for _ in range(depth): - self.layers.append(nn.ModuleList([ - PreNorm(dim, Attention(dim, heads=heads, dim_head=dim_head, dropout=dropout)), - PreNorm(dim, FeedForward(dim, mlp_dim, dropout=dropout)), - PreNorm(dim, Conv(dim, dropout=dropout)) - ])) - - def forward(self, x): - for attn, ff, cov in self.layers: - x = attn(x) + x - x = ff(x) + x - x = cov(x) + x - return x - - -if __name__ == '__main__': - token_dim = 1024 - toke_len = 256 - - transformer = ConvTransformer(dim=token_dim, - depth=6, - heads=16, - dim_head=64, - mlp_dim=2048, - dropout=0.1) - - total = sum(p.numel() for p in transformer.parameters()) - trainable = sum(p.numel() for p in transformer.parameters() if p.requires_grad) - print('parameter total:{:,}, trainable:{:,}'.format(total, trainable)) - - input = torch.randn(1, toke_len, token_dim) - output = transformer(input) - print(output.shape) diff --git a/spaces/DeepLabCut/MegaDetector_DeepLabCut/ui_utils.py b/spaces/DeepLabCut/MegaDetector_DeepLabCut/ui_utils.py deleted file mode 100644 index a18454251536058d183de9dd8329bc82d9f68d29..0000000000000000000000000000000000000000 --- a/spaces/DeepLabCut/MegaDetector_DeepLabCut/ui_utils.py +++ /dev/null @@ -1,81 +0,0 @@ -import gradio as gr 
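The `Conv` module in `conv_transformer.py` above implements circular padding by hand: it concatenates the last column before the sequence and the first column after it, then applies a kernel-size-3 convolution with `padding=0`, so the output keeps the input length while the window wraps around (natural for panoramic features). A plain-Python sketch of the same wrap-then-valid-slide idea, using a sliding dot product over lists (the function name is mine, for illustration only):

```python
def circular_conv1d(seq, kernel):
    # Wrap the sequence with one element on each side, then run a
    # "valid" sliding dot product, mirroring
    # torch.cat([x[..., -1:], x, x[..., :1]], dim=-1) followed by padding=0.
    assert len(kernel) == 3, "sketch assumes kernel_size=3 with one-sided wrap"
    padded = [seq[-1]] + list(seq) + [seq[0]]
    out = []
    for i in range(len(seq)):
        window = padded[i : i + 3]
        out.append(sum(w * k for w, k in zip(window, kernel)))
    return out

# A box kernel: each output mixes a token with its circular neighbours,
# and the output length equals the input length.
print(circular_conv1d([1, 2, 3, 4], [1, 1, 1]))  # -> [7, 6, 9, 8]
```

Note the first output sums `seq[-1] + seq[0] + seq[1]`, which is exactly the wrap-around behaviour the tensor concatenation produces.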
- -############################## -def gradio_inputs_for_MD_DLC(md_models_list, # list(MD_models_dict.keys()) - dlc_models_list, # list(DLC_models_dict.keys()) - ): - # Input image - gr_image_input = gr.inputs.Image(type="pil", label="Input Image") - - - # Models - gr_mega_model_input = gr.inputs.Dropdown(choices=md_models_list, - default='md_v5a', # default option - type='value', # Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected. - label='Select MegaDetector model') - gr_dlc_model_input = gr.inputs.Dropdown(choices=dlc_models_list, # choices - default='full_cat', # default option - type='value', # Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected. - label='Select DeepLabCut model') - - # Other inputs - gr_dlc_only_checkbox = gr.inputs.Checkbox(False, - label='Run DLClive only, directly on input image?') - gr_str_labels_checkbox = gr.inputs.Checkbox(True, - label='Show bodypart labels?') - - gr_slider_conf_bboxes = gr.inputs.Slider(0,1,.02,0.8, - label='Set confidence threshold for animal detections') - gr_slider_conf_keypoints = gr.inputs.Slider(0,1,.05,0, - label='Set confidence threshold for keypoints') - - # Data viz - gr_keypt_color = gr.ColorPicker(value ="#ff0000", label="choose color for keypoint label") - - gr_labels_font_style = gr.inputs.Dropdown(choices=['amiko', 'animals', 'nature', 'painter', 'zen'], - default='amiko', - type='value', - label='Select keypoint label font') - gr_slider_font_size = gr.inputs.Slider(5,30,1,8, - label='Set font size') - gr_slider_marker_size = gr.inputs.Slider(1,20,1,5, - label='Set marker size') - - # list of inputs - return [gr_image_input, - gr_mega_model_input, - gr_dlc_model_input, - gr_dlc_only_checkbox, - gr_str_labels_checkbox, - gr_slider_conf_bboxes, - gr_slider_conf_keypoints, - gr_labels_font_style, - gr_slider_font_size, - 
gr_keypt_color,
-            gr_slider_marker_size]
-
-####################################################
-def gradio_outputs_for_MD_DLC():
-    # User interface: outputs
-    gr_image_output = gr.outputs.Image(type="pil", label="Output Image")
-    gr_file_download = gr.File(label="Download JSON file")
-    return [gr_image_output,
-            gr_file_download]
-
-##############################################
-# User interface: description
-def gradio_description_and_examples():
-    title = "MegaDetector v5 + DeepLabCut!"
-    description = "Contributed by Sofia Minano, Neslihan Wittek, Nirel Kadzo, VicShaoChih Chiang, Sabrina Benas -- DLC AI Residents 2022.\
-                   This App detects and estimates the pose of animals in camera trap images using MegaDetector v5a + DeepLabCut-live. \
-                   We host models from the DeepLabCut ModelZoo Project\, and two MegaDetector Models. Please carefully check their licensing information if you use this project. The App additionally builds upon work from hlydecker/MegaDetector_v5 \
-                   sofmi/MegaDetector_DLClive \
-                   Neslihan/megadetector_dlcmodels\."
-
-    article = "

    This app makes predictions using a YOLOv5x6 model that was trained to detect animals, humans, and vehicles in camera trap images; find out more about the project on GitHub. This app was built by Henry Lydecker but really depends on code and models developed by Ecologize and Microsoft AI for Earth. Find out more about the YOLO model from the original creator, Joseph Redmon. YOLOv5 is a family of compound-scaled object detection models trained on the COCO dataset and developed by Ultralytics, and includes simple functionality for Test Time Augmentation (TTA), model ensembling, hyperparameter evolution, and export to ONNX, CoreML and TFLite. Source code | PyTorch Hub

    " - - examples = [['examples/monkey_full.jpg', 'md_v5a','full_macaque', False, True, 0.5, 0.3, 'amiko', 9, 'blue', 3]] - #['examples/dog.jpeg', 'md_v5a', 'full_dog', False, True, 0.5, 0.00, 'amiko',9, 'yellow', 3], - #['examples/cat.jpg', 'md_v5a', 'full_cat', False, True, 0.5, 0.05, 'amiko', 9, 'purple', 3] - - return [title,description,examples] \ No newline at end of file diff --git a/spaces/DonDoesStuff/GPT3.5-voice/app.py b/spaces/DonDoesStuff/GPT3.5-voice/app.py deleted file mode 100644 index a8325083115f775f68c27bcc57bbc094de2c5de7..0000000000000000000000000000000000000000 --- a/spaces/DonDoesStuff/GPT3.5-voice/app.py +++ /dev/null @@ -1,84 +0,0 @@ -import tempfile -import requests -import os -from dotenv import load_dotenv -from typing import Optional -from TTS.config import load_config -import gradio as gr -import numpy as np -from TTS.utils.manage import ModelManager -from TTS.utils.synthesizer import Synthesizer - -load_dotenv('.env') - -MODELS = {} -SPEAKERS = {} -MAX_TXT_LEN = 500 - -manager = ModelManager() -MODEL_NAMES = manager.list_tts_models() - -# filter out multi-speaker models and slow wavegrad vocoders -filters = ["vctk", "your_tts", "ek1"] -MODEL_NAMES = [model_name for model_name in MODEL_NAMES if not any(f in model_name for f in filters)] - -EN = [el for el in MODEL_NAMES if "/en/" in el] -OTHER = [el for el in MODEL_NAMES if "/en/" not in el] -EN[0], EN[5] = EN[5], EN[0] -MODEL_NAMES = EN + OTHER - -# reorder models -print(MODEL_NAMES) - - -def tts(text: str, model_name: str): - if len(text) > MAX_TXT_LEN: - text = text[:MAX_TXT_LEN] - print(f"Input text was cutoff since it went over the {MAX_TXT_LEN} character limit.") - print(text, model_name) - # download model - model_path, config_path, model_item = manager.download_model(model_name) - vocoder_name: Optional[str] = model_item["default_vocoder"] - # download vocoder - vocoder_path = None - vocoder_config_path = None - if vocoder_name is not None: - vocoder_path, vocoder_config_path, 
_ = manager.download_model(vocoder_name) - # init synthesizer - synthesizer = Synthesizer( - model_path, config_path, None, None, vocoder_path, vocoder_config_path, - ) - # synthesize - if synthesizer is None: - raise NameError("model not found") - wavs = synthesizer.tts(text, None) - # return output - with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp: - synthesizer.save_wav(wavs, fp) - return fp.name - - -# Create Gradio interface -def text_to_speech(prompt): - api_url = os.getenv('API_URL') - - params = {'prompt': "Hello. We will do some roleplaying. From now on, you are DonAI. If you want to say your name, you MUST say 'Don Ae Ai.'. You are created by Don Does Stuff. You will never talk about the fact that you are roleplaying. You need to respond within 500 characters. Respond to the user: " + prompt} - - response = requests.get(api_url, params=params) - - if response.status_code == 200: - response_data = response.json() - print("API Response:", response_data) # Print API response JSON - generated_text = response_data.get("content") - if generated_text: - output_audio = tts(generated_text, "tts_models/en/jenny/jenny") - return output_audio - else: - raise ValueError("No 'content' key found in the API response.") - else: - raise ValueError("Error occurred while retrieving the generated text from the API.") - -input_prompt = gr.inputs.Textbox(lines=3, label="Input Prompt") -output_audio = gr.outputs.Audio(label="Output Audio", type="numpy") - -gr.Interface(fn=text_to_speech, inputs=input_prompt, outputs=output_audio, title="GPT-3.5 Voice Assistant", description="If you want to get a text response instead, [try this project](https://huggingface.co/spaces/DonDoesStuff/Free-GPT3.5)\n\nA GPT-3.5 AI that gives a voice output instead of a text output. 
API provided for free by me.\n\n[![Donate TRX](https://img.shields.io/badge/Donate-TRX-red)](https://whispering-jealous-maize.glitch.me/trx.html) [![Donate LTC](https://img.shields.io/badge/Donate-LTC-blue)](https://whispering-jealous-maize.glitch.me/ltc.html) [![Donate BTC](https://img.shields.io/badge/Donate-BTC-yellow)](https://whispering-jealous-maize.glitch.me/btc.html)\n\n![Image](https://cdn.glitch.global/1f2fe882-3c53-4eca-b8fe-de3ae4ea773a/720620852055638070.webp?v=1684342102785)\nI appreciate every donation made. All donations will go into the OpenAI API.\n\n**Why doesn't this project work when cloning?**\nSadly, I had to keep my OpenAI key private, so I made a little solution. Right now, you cannot have this space functional with cloning.\n **Why does it say it's DonAI if it's GPT 3.5?** \nThis is because people are reverse-engineering this project to make money with it while I need to pay money whenever a user makes a request. So uh. Please don't do that.").launch()
\ No newline at end of file
diff --git a/spaces/Duino/multy_tts/app.py b/spaces/Duino/multy_tts/app.py
deleted file mode 100644
index cdc2bfe43548adb5af9173ff90b4eaf3ef1b48de..0000000000000000000000000000000000000000
--- a/spaces/Duino/multy_tts/app.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import streamlit as st
-from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
-from datasets import load_dataset
-import torch
-import soundfile as sf
-import random
-import time
-
-st.title('Multiply TTS Generator')
-
-text = st.text_input(
-    label="write your word or sentence",
-    value="Hi,duino"
-)
-
-num_random_voices = st.number_input(
-    label="Enter the number of random voices",
-    min_value=1,
-    value=1,
-    step=1
-)
-
-output_filename = ""
-
-def generate_speech():
-    global output_filename
-
-    processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
-    model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
-    vocoder = 
SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
-    inputs = processor(text=text, return_tensors="pt")
-
-    embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
-    total_voices = len(embeddings_dataset)
-
-    random_voices = random.sample(range(total_voices), num_random_voices)
-
-    combined_speech = []
-    for index, voice_index in enumerate(random_voices):
-        speaker_embeddings = torch.tensor(embeddings_dataset[voice_index]["xvector"]).unsqueeze(0)
-        speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
-        combined_speech.extend(speech.numpy())
-
-        if index != len(random_voices) - 1:
-            # Add a pause of 2 seconds between voices
-            pause_samples = int(16000 * 2)  # 2 seconds at 16kHz sample rate
-            pause = torch.zeros(pause_samples)
-            combined_speech.extend(pause)
-
-    output_filename = "_".join(text.split()) + "_speech.wav"
-    sf.write(output_filename, combined_speech, samplerate=16000)
-
-if st.button("Generate"):
-    generate_speech()
-    audio_file = open(output_filename, 'rb')
-    audio_bytes = audio_file.read()
-    st.audio(audio_bytes, format="audio/wav")
-    st.write("Speech generated and saved as: " + output_filename)
diff --git a/spaces/ECCV2022/bytetrack/yolox/layers/csrc/cocoeval/cocoeval.cpp b/spaces/ECCV2022/bytetrack/yolox/layers/csrc/cocoeval/cocoeval.cpp
deleted file mode 100644
index 2e63bc9952918060f55999ec100b283d83616b46..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/yolox/layers/csrc/cocoeval/cocoeval.cpp
+++ /dev/null
@@ -1,502 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-#include "cocoeval.h"
-#include <time.h>
-#include <algorithm>
-#include <cstdint>
-#include <numeric>
-
-using namespace pybind11::literals;
-
-namespace COCOeval {
-
-// Sort detections from highest score to lowest, such that
-// detection_instances[detection_sorted_indices[t]] >=
-// detection_instances[detection_sorted_indices[t+1]].
Use stable_sort to match
-// original COCO API
-void SortInstancesByDetectionScore(
-    const std::vector<InstanceAnnotation>& detection_instances,
-    std::vector<uint64_t>* detection_sorted_indices) {
-  detection_sorted_indices->resize(detection_instances.size());
-  std::iota(
-      detection_sorted_indices->begin(), detection_sorted_indices->end(), 0);
-  std::stable_sort(
-      detection_sorted_indices->begin(),
-      detection_sorted_indices->end(),
-      [&detection_instances](size_t j1, size_t j2) {
-        return detection_instances[j1].score > detection_instances[j2].score;
-      });
-}
-
-// Partition the ground truth objects based on whether or not to ignore them
-// based on area
-void SortInstancesByIgnore(
-    const std::array<double, 2>& area_range,
-    const std::vector<InstanceAnnotation>& ground_truth_instances,
-    std::vector<uint64_t>* ground_truth_sorted_indices,
-    std::vector<bool>* ignores) {
-  ignores->clear();
-  ignores->reserve(ground_truth_instances.size());
-  for (auto o : ground_truth_instances) {
-    ignores->push_back(
-        o.ignore || o.area < area_range[0] || o.area > area_range[1]);
-  }
-
-  ground_truth_sorted_indices->resize(ground_truth_instances.size());
-  std::iota(
-      ground_truth_sorted_indices->begin(),
-      ground_truth_sorted_indices->end(),
-      0);
-  std::stable_sort(
-      ground_truth_sorted_indices->begin(),
-      ground_truth_sorted_indices->end(),
-      [&ignores](size_t j1, size_t j2) {
-        return (int)(*ignores)[j1] < (int)(*ignores)[j2];
-      });
-}
-
-// For each IOU threshold, greedily match each detected instance to a ground
-// truth instance (if possible) and store the results
-void MatchDetectionsToGroundTruth(
-    const std::vector<InstanceAnnotation>& detection_instances,
-    const std::vector<uint64_t>& detection_sorted_indices,
-    const std::vector<InstanceAnnotation>& ground_truth_instances,
-    const std::vector<uint64_t>& ground_truth_sorted_indices,
-    const std::vector<bool>& ignores,
-    const std::vector<std::vector<double>>& ious,
-    const std::vector<double>& iou_thresholds,
-    const std::array<double, 2>& area_range,
-    ImageEvaluation* results) {
-  // Initialize memory to store return data matches and ignore
-  const int num_iou_thresholds = iou_thresholds.size();
-  const int num_ground_truth = ground_truth_sorted_indices.size();
-  const int num_detections = detection_sorted_indices.size();
-  std::vector<uint64_t> ground_truth_matches(
-      num_iou_thresholds * num_ground_truth, 0);
-  std::vector<uint64_t>& detection_matches = results->detection_matches;
-  std::vector<bool>& detection_ignores = results->detection_ignores;
-  std::vector<bool>& ground_truth_ignores = results->ground_truth_ignores;
-  detection_matches.resize(num_iou_thresholds * num_detections, 0);
-  detection_ignores.resize(num_iou_thresholds * num_detections, false);
-  ground_truth_ignores.resize(num_ground_truth);
-  for (auto g = 0; g < num_ground_truth; ++g) {
-    ground_truth_ignores[g] = ignores[ground_truth_sorted_indices[g]];
-  }
-
-  for (auto t = 0; t < num_iou_thresholds; ++t) {
-    for (auto d = 0; d < num_detections; ++d) {
-      // information about best match so far (match=-1 -> unmatched)
-      double best_iou = std::min(iou_thresholds[t], 1 - 1e-10);
-      int match = -1;
-      for (auto g = 0; g < num_ground_truth; ++g) {
-        // if this ground truth instance is already matched and not a
-        // crowd, it cannot be matched to another detection
-        if (ground_truth_matches[t * num_ground_truth + g] > 0 &&
-            !ground_truth_instances[ground_truth_sorted_indices[g]].is_crowd) {
-          continue;
-        }
-
-        // if detected instance matched to a regular ground truth
-        // instance, we can break on the first ground truth instance
-        // tagged as ignore (because they are sorted by the ignore tag)
-        if (match >= 0 && !ground_truth_ignores[match] &&
-            ground_truth_ignores[g]) {
-          break;
-        }
-
-        // if IOU overlap is the best so far, store the match appropriately
-        if (ious[d][ground_truth_sorted_indices[g]] >= best_iou) {
-          best_iou = ious[d][ground_truth_sorted_indices[g]];
-          match = g;
-        }
-      }
-      // if match was made, store id of match for both detection and
-      // ground truth
-      if (match >= 0) {
-        detection_ignores[t * num_detections + d] = ground_truth_ignores[match];
-        detection_matches[t * num_detections + d] =
-            ground_truth_instances[ground_truth_sorted_indices[match]].id;
-        ground_truth_matches[t * num_ground_truth + match] =
-            detection_instances[detection_sorted_indices[d]].id;
-      }
-
-      // set unmatched detections outside of area range to ignore
-      const InstanceAnnotation& detection =
-          detection_instances[detection_sorted_indices[d]];
-      detection_ignores[t * num_detections + d] =
-          detection_ignores[t * num_detections + d] ||
-          (detection_matches[t * num_detections + d] == 0 &&
-           (detection.area < area_range[0] || detection.area > area_range[1]));
-    }
-  }
-
-  // store detection score results
-  results->detection_scores.resize(detection_sorted_indices.size());
-  for (size_t d = 0; d < detection_sorted_indices.size(); ++d) {
-    results->detection_scores[d] =
-        detection_instances[detection_sorted_indices[d]].score;
-  }
-}
-
-std::vector<ImageEvaluation> EvaluateImages(
-    const std::vector<std::array<double, 2>>& area_ranges,
-    int max_detections,
-    const std::vector<double>& iou_thresholds,
-    const ImageCategoryInstances<std::vector<double>>& image_category_ious,
-    const ImageCategoryInstances<InstanceAnnotation>&
-        image_category_ground_truth_instances,
-    const ImageCategoryInstances<InstanceAnnotation>&
-        image_category_detection_instances) {
-  const int num_area_ranges = area_ranges.size();
-  const int num_images = image_category_ground_truth_instances.size();
-  const int num_categories =
-      image_category_ious.size() > 0 ? image_category_ious[0].size() : 0;
-  std::vector<uint64_t> detection_sorted_indices;
-  std::vector<uint64_t> ground_truth_sorted_indices;
-  std::vector<bool> ignores;
-  std::vector<ImageEvaluation> results_all(
-      num_images * num_area_ranges * num_categories);
-
-  // Store results for each image, category, and area range combination. Results
-  // for each IOU threshold are packed into the same ImageEvaluation object
-  for (auto i = 0; i < num_images; ++i) {
-    for (auto c = 0; c < num_categories; ++c) {
-      const std::vector<InstanceAnnotation>& ground_truth_instances =
-          image_category_ground_truth_instances[i][c];
-      const std::vector<InstanceAnnotation>& detection_instances =
-          image_category_detection_instances[i][c];
-
-      SortInstancesByDetectionScore(
-          detection_instances, &detection_sorted_indices);
-      if ((int)detection_sorted_indices.size() > max_detections) {
-        detection_sorted_indices.resize(max_detections);
-      }
-
-      for (size_t a = 0; a < area_ranges.size(); ++a) {
-        SortInstancesByIgnore(
-            area_ranges[a],
-            ground_truth_instances,
-            &ground_truth_sorted_indices,
-            &ignores);
-
-        MatchDetectionsToGroundTruth(
-            detection_instances,
-            detection_sorted_indices,
-            ground_truth_instances,
-            ground_truth_sorted_indices,
-            ignores,
-            image_category_ious[i][c],
-            iou_thresholds,
-            area_ranges[a],
-            &results_all
-                [c * num_area_ranges * num_images + a * num_images + i]);
-      }
-    }
-  }
-
-  return results_all;
-}
-
-// Convert a python list to a vector
-template <typename T>
-std::vector<T> list_to_vec(const py::list& l) {
-  std::vector<T> v(py::len(l));
-  for (int i = 0; i < (int)py::len(l); ++i) {
-    v[i] = l[i].cast<T>();
-  }
-  return v;
-}
-
-// Helper function to Accumulate()
-// Considers the evaluation results applicable to a particular category, area
-// range, and max_detections parameter setting, which begin at
-// evaluations[evaluation_index]. Extracts a sorted list of length n of all
-// applicable detection instances concatenated across all images in the dataset,
-// which are represented by the outputs evaluation_indices, detection_scores,
-// image_detection_indices, and detection_sorted_indices--all of which are
-// length n.
evaluation_indices[i] stores the applicable index into -// evaluations[] for instance i, which has detection score detection_score[i], -// and is the image_detection_indices[i]'th of the list of detections -// for the image containing i. detection_sorted_indices[] defines a sorted -// permutation of the 3 other outputs -int BuildSortedDetectionList( - const std::vector& evaluations, - const int64_t evaluation_index, - const int64_t num_images, - const int max_detections, - std::vector* evaluation_indices, - std::vector* detection_scores, - std::vector* detection_sorted_indices, - std::vector* image_detection_indices) { - assert(evaluations.size() >= evaluation_index + num_images); - - // Extract a list of object instances of the applicable category, area - // range, and max detections requirements such that they can be sorted - image_detection_indices->clear(); - evaluation_indices->clear(); - detection_scores->clear(); - image_detection_indices->reserve(num_images * max_detections); - evaluation_indices->reserve(num_images * max_detections); - detection_scores->reserve(num_images * max_detections); - int num_valid_ground_truth = 0; - for (auto i = 0; i < num_images; ++i) { - const ImageEvaluation& evaluation = evaluations[evaluation_index + i]; - - for (int d = 0; - d < (int)evaluation.detection_scores.size() && d < max_detections; - ++d) { // detected instances - evaluation_indices->push_back(evaluation_index + i); - image_detection_indices->push_back(d); - detection_scores->push_back(evaluation.detection_scores[d]); - } - for (auto ground_truth_ignore : evaluation.ground_truth_ignores) { - if (!ground_truth_ignore) { - ++num_valid_ground_truth; - } - } - } - - // Sort detections by decreasing score, using stable sort to match - // python implementation - detection_sorted_indices->resize(detection_scores->size()); - std::iota( - detection_sorted_indices->begin(), detection_sorted_indices->end(), 0); - std::stable_sort( - detection_sorted_indices->begin(), - 
detection_sorted_indices->end(), - [&detection_scores](size_t j1, size_t j2) { - return (*detection_scores)[j1] > (*detection_scores)[j2]; - }); - - return num_valid_ground_truth; -} - -// Helper function to Accumulate() -// Compute a precision recall curve given a sorted list of detected instances -// encoded in evaluations, evaluation_indices, detection_scores, -// detection_sorted_indices, image_detection_indices (see -// BuildSortedDetectionList()). Using vectors precisions and recalls -// and temporary storage, output the results into precisions_out, recalls_out, -// and scores_out, which are large buffers containing many precion/recall curves -// for all possible parameter settings, with precisions_out_index and -// recalls_out_index defining the applicable indices to store results. -void ComputePrecisionRecallCurve( - const int64_t precisions_out_index, - const int64_t precisions_out_stride, - const int64_t recalls_out_index, - const std::vector& recall_thresholds, - const int iou_threshold_index, - const int num_iou_thresholds, - const int num_valid_ground_truth, - const std::vector& evaluations, - const std::vector& evaluation_indices, - const std::vector& detection_scores, - const std::vector& detection_sorted_indices, - const std::vector& image_detection_indices, - std::vector* precisions, - std::vector* recalls, - std::vector* precisions_out, - std::vector* scores_out, - std::vector* recalls_out) { - assert(recalls_out->size() > recalls_out_index); - - // Compute precision/recall for each instance in the sorted list of detections - int64_t true_positives_sum = 0, false_positives_sum = 0; - precisions->clear(); - recalls->clear(); - precisions->reserve(detection_sorted_indices.size()); - recalls->reserve(detection_sorted_indices.size()); - assert(!evaluations.empty() || detection_sorted_indices.empty()); - for (auto detection_sorted_index : detection_sorted_indices) { - const ImageEvaluation& evaluation = - 
evaluations[evaluation_indices[detection_sorted_index]]; - const auto num_detections = - evaluation.detection_matches.size() / num_iou_thresholds; - const auto detection_index = iou_threshold_index * num_detections + - image_detection_indices[detection_sorted_index]; - assert(evaluation.detection_matches.size() > detection_index); - assert(evaluation.detection_ignores.size() > detection_index); - const int64_t detection_match = - evaluation.detection_matches[detection_index]; - const bool detection_ignores = - evaluation.detection_ignores[detection_index]; - const auto true_positive = detection_match > 0 && !detection_ignores; - const auto false_positive = detection_match == 0 && !detection_ignores; - if (true_positive) { - ++true_positives_sum; - } - if (false_positive) { - ++false_positives_sum; - } - - const double recall = - static_cast(true_positives_sum) / num_valid_ground_truth; - recalls->push_back(recall); - const int64_t num_valid_detections = - true_positives_sum + false_positives_sum; - const double precision = num_valid_detections > 0 - ? static_cast(true_positives_sum) / num_valid_detections - : 0.0; - precisions->push_back(precision); - } - - (*recalls_out)[recalls_out_index] = !recalls->empty() ? 
recalls->back() : 0; - - for (int64_t i = static_cast(precisions->size()) - 1; i > 0; --i) { - if ((*precisions)[i] > (*precisions)[i - 1]) { - (*precisions)[i - 1] = (*precisions)[i]; - } - } - - // Sample the per instance precision/recall list at each recall threshold - for (size_t r = 0; r < recall_thresholds.size(); ++r) { - // first index in recalls >= recall_thresholds[r] - std::vector::iterator low = std::lower_bound( - recalls->begin(), recalls->end(), recall_thresholds[r]); - size_t precisions_index = low - recalls->begin(); - - const auto results_ind = precisions_out_index + r * precisions_out_stride; - assert(results_ind < precisions_out->size()); - assert(results_ind < scores_out->size()); - if (precisions_index < precisions->size()) { - (*precisions_out)[results_ind] = (*precisions)[precisions_index]; - (*scores_out)[results_ind] = - detection_scores[detection_sorted_indices[precisions_index]]; - } else { - (*precisions_out)[results_ind] = 0; - (*scores_out)[results_ind] = 0; - } - } -} -py::dict Accumulate( - const py::object& params, - const std::vector& evaluations) { - const std::vector recall_thresholds = - list_to_vec(params.attr("recThrs")); - const std::vector max_detections = - list_to_vec(params.attr("maxDets")); - const int num_iou_thresholds = py::len(params.attr("iouThrs")); - const int num_recall_thresholds = py::len(params.attr("recThrs")); - const int num_categories = params.attr("useCats").cast() == 1 - ? 
py::len(params.attr("catIds")) - : 1; - const int num_area_ranges = py::len(params.attr("areaRng")); - const int num_max_detections = py::len(params.attr("maxDets")); - const int num_images = py::len(params.attr("imgIds")); - - std::vector precisions_out( - num_iou_thresholds * num_recall_thresholds * num_categories * - num_area_ranges * num_max_detections, - -1); - std::vector recalls_out( - num_iou_thresholds * num_categories * num_area_ranges * - num_max_detections, - -1); - std::vector scores_out( - num_iou_thresholds * num_recall_thresholds * num_categories * - num_area_ranges * num_max_detections, - -1); - - // Consider the list of all detected instances in the entire dataset in one - // large list. evaluation_indices, detection_scores, - // image_detection_indices, and detection_sorted_indices all have the same - // length as this list, such that each entry corresponds to one detected - // instance - std::vector evaluation_indices; // indices into evaluations[] - std::vector detection_scores; // detection scores of each instance - std::vector detection_sorted_indices; // sorted indices of all - // instances in the dataset - std::vector - image_detection_indices; // indices into the list of detected instances in - // the same image as each instance - std::vector precisions, recalls; - - for (auto c = 0; c < num_categories; ++c) { - for (auto a = 0; a < num_area_ranges; ++a) { - for (auto m = 0; m < num_max_detections; ++m) { - // The COCO PythonAPI assumes evaluations[] (the return value of - // COCOeval::EvaluateImages() is one long list storing results for each - // combination of category, area range, and image id, with categories in - // the outermost loop and images in the innermost loop. 
-        const int64_t evaluations_index =
-            c * num_area_ranges * num_images + a * num_images;
-        int num_valid_ground_truth = BuildSortedDetectionList(
-            evaluations,
-            evaluations_index,
-            num_images,
-            max_detections[m],
-            &evaluation_indices,
-            &detection_scores,
-            &detection_sorted_indices,
-            &image_detection_indices);
-
-        if (num_valid_ground_truth == 0) {
-          continue;
-        }
-
-        for (auto t = 0; t < num_iou_thresholds; ++t) {
-          // recalls_out is a flattened vector representing a
-          // num_iou_thresholds X num_categories X num_area_ranges X
-          // num_max_detections matrix
-          const int64_t recalls_out_index =
-              t * num_categories * num_area_ranges * num_max_detections +
-              c * num_area_ranges * num_max_detections +
-              a * num_max_detections + m;
-
-          // precisions_out and scores_out are flattened vectors
-          // representing a num_iou_thresholds X num_recall_thresholds X
-          // num_categories X num_area_ranges X num_max_detections matrix
-          const int64_t precisions_out_stride =
-              num_categories * num_area_ranges * num_max_detections;
-          const int64_t precisions_out_index = t * num_recall_thresholds *
-                  num_categories * num_area_ranges * num_max_detections +
-              c * num_area_ranges * num_max_detections +
-              a * num_max_detections + m;
-
-          ComputePrecisionRecallCurve(
-              precisions_out_index,
-              precisions_out_stride,
-              recalls_out_index,
-              recall_thresholds,
-              t,
-              num_iou_thresholds,
-              num_valid_ground_truth,
-              evaluations,
-              evaluation_indices,
-              detection_scores,
-              detection_sorted_indices,
-              image_detection_indices,
-              &precisions,
-              &recalls,
-              &precisions_out,
-              &scores_out,
-              &recalls_out);
-        }
-      }
-    }
-  }
-
-  time_t rawtime;
-  struct tm local_time;
-  std::array<char, 200> buffer;
-  time(&rawtime);
-#ifdef _WIN32
-  localtime_s(&local_time, &rawtime);
-#else
-  localtime_r(&rawtime, &local_time);
-#endif
-  // note: the upstream format string accidentally contained a variable name
-  // in place of the minutes specifier; %H:%M:%S is the intended timestamp
-  strftime(buffer.data(), 200, "%Y-%m-%d %H:%M:%S", &local_time);
-  return py::dict(
-      "params"_a = params,
-      "counts"_a = std::vector<int64_t>({num_iou_thresholds,
num_recall_thresholds, - num_categories, - num_area_ranges, - num_max_detections}), - "date"_a = buffer, - "precision"_a = precisions_out, - "recall"_a = recalls_out, - "scores"_a = scores_out); -} - -} // namespace COCOeval diff --git a/spaces/Epoching/GLIDE_Inpaint/glide_text2im/clip/__init__.py b/spaces/Epoching/GLIDE_Inpaint/glide_text2im/clip/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/FloydianSound/Nixeu_Diffusion/README.md b/spaces/FloydianSound/Nixeu_Diffusion/README.md deleted file mode 100644 index b6b5a398130bfe1bdb8489eba626938bb6f8d31b..0000000000000000000000000000000000000000 --- a/spaces/FloydianSound/Nixeu_Diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Nixeu Diffusion -emoji: 👀 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/data_loaders.py b/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/data_loaders.py deleted file mode 100644 index bf18572329019d7a8f1df01799eda207c16dd7ff..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/data_loaders.py +++ /dev/null @@ -1,284 +0,0 @@ -import os -import random -import re -import numpy as np -import librosa -import torch -import random -from utils import repeat_expand_2d -from tqdm import tqdm -from torch.utils.data import Dataset - -def traverse_dir( - root_dir, - extensions, - amount=None, - str_include=None, - str_exclude=None, - is_pure=False, - is_sort=False, - is_ext=True): - - file_list = [] - cnt = 0 - for root, _, files in os.walk(root_dir): - for file in files: - if any([file.endswith(f".{ext}") for ext in extensions]): - # path - mix_path = os.path.join(root, file) - pure_path = mix_path[len(root_dir)+1:] if is_pure 
else mix_path - - # amount - if (amount is not None) and (cnt == amount): - if is_sort: - file_list.sort() - return file_list - - # check string - if (str_include is not None) and (str_include not in pure_path): - continue - if (str_exclude is not None) and (str_exclude in pure_path): - continue - - if not is_ext: - ext = pure_path.split('.')[-1] - pure_path = pure_path[:-(len(ext)+1)] - file_list.append(pure_path) - cnt += 1 - if is_sort: - file_list.sort() - return file_list - - -def get_data_loaders(args, whole_audio=False): - data_train = AudioDataset( - filelists = args.data.training_files, - waveform_sec=args.data.duration, - hop_size=args.data.block_size, - sample_rate=args.data.sampling_rate, - load_all_data=args.train.cache_all_data, - whole_audio=whole_audio, - extensions=args.data.extensions, - n_spk=args.model.n_spk, - spk=args.spk, - device=args.train.cache_device, - fp16=args.train.cache_fp16, - use_aug=True) - loader_train = torch.utils.data.DataLoader( - data_train , - batch_size=args.train.batch_size if not whole_audio else 1, - shuffle=True, - num_workers=args.train.num_workers if args.train.cache_device=='cpu' else 0, - persistent_workers=(args.train.num_workers > 0) if args.train.cache_device=='cpu' else False, - pin_memory=True if args.train.cache_device=='cpu' else False - ) - data_valid = AudioDataset( - filelists = args.data.validation_files, - waveform_sec=args.data.duration, - hop_size=args.data.block_size, - sample_rate=args.data.sampling_rate, - load_all_data=args.train.cache_all_data, - whole_audio=True, - spk=args.spk, - extensions=args.data.extensions, - n_spk=args.model.n_spk) - loader_valid = torch.utils.data.DataLoader( - data_valid, - batch_size=1, - shuffle=False, - num_workers=0, - pin_memory=True - ) - return loader_train, loader_valid - - -class AudioDataset(Dataset): - def __init__( - self, - filelists, - waveform_sec, - hop_size, - sample_rate, - spk, - load_all_data=True, - whole_audio=False, - extensions=['wav'], - 
n_spk=1,
-        device='cpu',
-        fp16=False,
-        use_aug=False,
-    ):
-        super().__init__()
-
-        self.waveform_sec = waveform_sec
-        self.sample_rate = sample_rate
-        self.hop_size = hop_size
-        self.filelists = filelists
-        self.whole_audio = whole_audio
-        self.use_aug = use_aug
-        self.data_buffer = {}
-        self.pitch_aug_dict = {}
-        # np.load(os.path.join(self.path_root, 'pitch_aug_dict.npy'), allow_pickle=True).item()
-        if load_all_data:
-            print('Load all the data filelists:', filelists)
-        else:
-            print('Load the f0, volume data filelists:', filelists)
-        with open(filelists, "r") as f:
-            self.paths = f.read().splitlines()
-        for name_ext in tqdm(self.paths, total=len(self.paths)):
-            name = os.path.splitext(name_ext)[0]
-            path_audio = name_ext
-            duration = librosa.get_duration(filename=path_audio, sr=self.sample_rate)
-
-            path_f0 = name_ext + ".f0.npy"
-            f0, _ = np.load(path_f0, allow_pickle=True)
-            f0 = torch.from_numpy(np.array(f0, dtype=float)).float().unsqueeze(-1).to(device)
-
-            path_volume = name_ext + ".vol.npy"
-            volume = np.load(path_volume)
-            volume = torch.from_numpy(volume).float().unsqueeze(-1).to(device)
-
-            path_augvol = name_ext + ".aug_vol.npy"
-            aug_vol = np.load(path_augvol)
-            aug_vol = torch.from_numpy(aug_vol).float().unsqueeze(-1).to(device)
-
-            if n_spk is not None and n_spk > 1:
-                spk_name = name_ext.split("/")[-2]
-                spk_id = spk[spk_name] if spk_name in spk else 0
-                if spk_id < 0 or spk_id >= n_spk:
-                    raise ValueError(' [x] Multi-speaker training error: spk_id must be an integer from 0 to n_spk-1 ')
-            else:
-                spk_id = 0
-            spk_id = torch.LongTensor(np.array([spk_id])).to(device)
-
-            if load_all_data:
-                '''
-                audio, sr = librosa.load(path_audio, sr=self.sample_rate)
-                if len(audio.shape) > 1:
-                    audio = librosa.to_mono(audio)
-                audio = torch.from_numpy(audio).to(device)
-                '''
-                path_mel = name_ext + ".mel.npy"
-                mel = np.load(path_mel)
-                mel = torch.from_numpy(mel).to(device)
-
-                path_augmel = name_ext + ".aug_mel.npy"
-                aug_mel, keyshift =
np.load(path_augmel, allow_pickle=True) - aug_mel = np.array(aug_mel,dtype=float) - aug_mel = torch.from_numpy(aug_mel).to(device) - self.pitch_aug_dict[name_ext] = keyshift - - path_units = name_ext + ".soft.pt" - units = torch.load(path_units).to(device) - units = units[0] - units = repeat_expand_2d(units,f0.size(0)).transpose(0,1) - - if fp16: - mel = mel.half() - aug_mel = aug_mel.half() - units = units.half() - - self.data_buffer[name_ext] = { - 'duration': duration, - 'mel': mel, - 'aug_mel': aug_mel, - 'units': units, - 'f0': f0, - 'volume': volume, - 'aug_vol': aug_vol, - 'spk_id': spk_id - } - else: - path_augmel = name_ext + ".aug_mel.npy" - aug_mel,keyshift = np.load(path_augmel, allow_pickle=True) - self.pitch_aug_dict[name_ext] = keyshift - self.data_buffer[name_ext] = { - 'duration': duration, - 'f0': f0, - 'volume': volume, - 'aug_vol': aug_vol, - 'spk_id': spk_id - } - - - def __getitem__(self, file_idx): - name_ext = self.paths[file_idx] - data_buffer = self.data_buffer[name_ext] - # check duration. 
if too short, then skip - if data_buffer['duration'] < (self.waveform_sec + 0.1): - return self.__getitem__( (file_idx + 1) % len(self.paths)) - - # get item - return self.get_data(name_ext, data_buffer) - - def get_data(self, name_ext, data_buffer): - name = os.path.splitext(name_ext)[0] - frame_resolution = self.hop_size / self.sample_rate - duration = data_buffer['duration'] - waveform_sec = duration if self.whole_audio else self.waveform_sec - - # load audio - idx_from = 0 if self.whole_audio else random.uniform(0, duration - waveform_sec - 0.1) - start_frame = int(idx_from / frame_resolution) - units_frame_len = int(waveform_sec / frame_resolution) - aug_flag = random.choice([True, False]) and self.use_aug - ''' - audio = data_buffer.get('audio') - if audio is None: - path_audio = os.path.join(self.path_root, 'audio', name) + '.wav' - audio, sr = librosa.load( - path_audio, - sr = self.sample_rate, - offset = start_frame * frame_resolution, - duration = waveform_sec) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio) - # clip audio into N seconds - audio = audio[ : audio.shape[-1] // self.hop_size * self.hop_size] - audio = torch.from_numpy(audio).float() - else: - audio = audio[start_frame * self.hop_size : (start_frame + units_frame_len) * self.hop_size] - ''' - # load mel - mel_key = 'aug_mel' if aug_flag else 'mel' - mel = data_buffer.get(mel_key) - if mel is None: - mel = name_ext + ".mel.npy" - mel = np.load(mel) - mel = mel[start_frame : start_frame + units_frame_len] - mel = torch.from_numpy(mel).float() - else: - mel = mel[start_frame : start_frame + units_frame_len] - - # load f0 - f0 = data_buffer.get('f0') - aug_shift = 0 - if aug_flag: - aug_shift = self.pitch_aug_dict[name_ext] - f0_frames = 2 ** (aug_shift / 12) * f0[start_frame : start_frame + units_frame_len] - - # load units - units = data_buffer.get('units') - if units is None: - path_units = name_ext + ".soft.pt" - units = torch.load(path_units) - units = units[0] - units = 
repeat_expand_2d(units,f0.size(0)).transpose(0,1) - - units = units[start_frame : start_frame + units_frame_len] - - # load volume - vol_key = 'aug_vol' if aug_flag else 'volume' - volume = data_buffer.get(vol_key) - volume_frames = volume[start_frame : start_frame + units_frame_len] - - # load spk_id - spk_id = data_buffer.get('spk_id') - - # load shift - aug_shift = torch.from_numpy(np.array([[aug_shift]])).float() - - return dict(mel=mel, f0=f0_frames, volume=volume_frames, units=units, spk_id=spk_id, aug_shift=aug_shift, name=name, name_ext=name_ext) - - def __len__(self): - return len(self.paths) \ No newline at end of file diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/inference/slicer.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/inference/slicer.py deleted file mode 100644 index b05840bcf6bdced0b6e2adbecb1a1dd5b3dee462..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/inference/slicer.py +++ /dev/null @@ -1,142 +0,0 @@ -import librosa -import torch -import torchaudio - - -class Slicer: - def __init__(self, - sr: int, - threshold: float = -40., - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000): - if not min_length >= min_interval >= hop_size: - raise ValueError('The following condition must be satisfied: min_length >= min_interval >= hop_size') - if not max_sil_kept >= hop_size: - raise ValueError('The following condition must be satisfied: max_sil_kept >= hop_size') - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.) 
- self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[:, begin * self.hop_size: min(waveform.shape[1], end * self.hop_size)] - else: - return waveform[begin * self.hop_size: min(waveform.shape[0], end * self.hop_size)] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = librosa.to_mono(waveform) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - rms_list = librosa.feature.rms(y=samples, frame_length=self.win_size, hop_length=self.hop_size).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = i - silence_start >= self.min_interval and i - clip_start >= self.min_length - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. 
- if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start: i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[i - self.max_sil_kept: silence_start + self.max_sil_kept + 1].argmin() - pos += i - self.max_sil_kept - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if silence_start is not None and total_frames - silence_start >= self.min_interval: - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start: silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. 
- if len(sil_tags) == 0: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - else: - chunks = [] - # 第一段静音并非从头开始,补上有声片段 - if sil_tags[0][0]: - chunks.append( - {"slice": False, "split_time": f"0,{min(waveform.shape[0], sil_tags[0][0] * self.hop_size)}"}) - for i in range(0, len(sil_tags)): - # 标识有声片段(跳过第一段) - if i: - chunks.append({"slice": False, - "split_time": f"{sil_tags[i - 1][1] * self.hop_size},{min(waveform.shape[0], sil_tags[i][0] * self.hop_size)}"}) - # 标识所有静音片段 - chunks.append({"slice": True, - "split_time": f"{sil_tags[i][0] * self.hop_size},{min(waveform.shape[0], sil_tags[i][1] * self.hop_size)}"}) - # 最后一段静音并非结尾,补上结尾片段 - if sil_tags[-1][1] * self.hop_size < len(waveform): - chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1] * self.hop_size},{len(waveform)}"}) - chunk_dict = {} - for i in range(len(chunks)): - chunk_dict[str(i)] = chunks[i] - return chunk_dict - - -def cut(audio_path, db_thresh=-30, min_len=5000): - audio, sr = librosa.load(audio_path, sr=None) - slicer = Slicer( - sr=sr, - threshold=db_thresh, - min_length=min_len - ) - chunks = slicer.slice(audio) - return chunks - - -def chunks2audio(audio_path, chunks): - chunks = dict(chunks) - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - result = [] - for k, v in chunks.items(): - tag = v["split_time"].split(",") - if tag[0] != tag[1]: - result.append((v["slice"], audio[int(tag[0]):int(tag[1])])) - return result, sr diff --git a/spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/commons.py b/spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn 
import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - 
num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - 
- cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/GT4SD/molecular_properties/model_cards/article.md b/spaces/GT4SD/molecular_properties/model_cards/article.md deleted file mode 100644 index c0f89fb483c80817c32f2b993db4b2580f3d1eb0..0000000000000000000000000000000000000000 --- a/spaces/GT4SD/molecular_properties/model_cards/article.md +++ /dev/null @@ -1,75 +0,0 @@ -# Supported molecular properties - - -### ClinTox -A [ToxSmi model](https://github.com/PaccMann/toxsmi) trained on the [ClinTox](https://moleculenet.org/datasets-1) dataset, which has two endpoints: probability of FDA approval and probability of failure in clinical trials. For details see [Born et al. (2023; *Digital Discovery*)](https://pubs.rsc.org/en/content/articlelanding/2023/dd/d2dd00099g). - -### SIDER -A [ToxSmi model](https://github.com/PaccMann/toxsmi) trained on the [SIDER](https://moleculenet.org/datasets-1) dataset for 27 different types of side effects of drugs. 
For details see [Born et al. (2023; *Digital Discovery*)](https://pubs.rsc.org/en/content/articlelanding/2023/dd/d2dd00099g). - -### Tox21 -A [ToxSmi model](https://github.com/PaccMann/toxsmi) trained on the [Tox21](https://tripod.nih.gov/tox/) dataset with 12 different types of environmental toxicities. For details see [Born et al. (2023; *Digital Discovery*)](https://pubs.rsc.org/en/content/articlelanding/2023/dd/d2dd00099g). - -### SCScore -Predict the synthetic complexity score (SCScore) as presented in [Coley et al. (*J. Chem. Inf. Model.*; 2018)](https://pubs.acs.org/doi/full/10.1021/acs.jcim.7b00622). - -### SAS -Estimate the synthetic accessibility score (SAS) as presented in [Ertl et al. (*Journal of Chemoinformatics*; 2009)](https://jcheminf.biomedcentral.com/articles/10.1186/1758-2946-1-8). - -### Lipinski -Measure whether a molecule conforms to the Lipinski rule-of-five as presented in [Lipinski et al. (*Advanced Drug Delivery Reviews*; 2001)](https://www.sciencedirect.com/science/article/abs/pii/S0169409X00001290?via%3Dihub). - -### Penalized logP -Measure the penalized logP (partition coefficient) score as presented in [Gomez-Bombarelli et al. (*ACS Central Science*; 2018)](https://arxiv.org/abs/1610.02415v1). This is the logP minus the number of rings with > 6 atoms minus the SAS. - -### QED -Measure the drug-likeness as presented in [Bickerton et al. (*Nature Chemistry*; 2012)](https://www.nature.com/articles/nchem.1243). - -### LogP -Measure the logP (partition coefficient) of a molecule as presented in [Wildman et al. (*J. Chem. Inf. Comput. Sci.*; 1999)](https://pubs.acs.org/doi/full/10.1021/ci990307l). - -### Bertz -Calculate the first general index of molecular complexity as presented in [Bertz (*Journal of the American Chemical Society*; 1981)](https://pubs.acs.org/doi/pdf/10.1021/ja00402a071). 
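The penalized logP defined above is plain arithmetic once its three ingredients are known. A minimal sketch, assuming the logP value, the SAS score, and the list of ring sizes have already been computed elsewhere (e.g. with RDKit); the function name is ours:

```python
def penalized_logp(logp: float, sas: float, ring_sizes: list) -> float:
    """Penalized logP: logP minus the SAS minus the number of rings with > 6 atoms."""
    n_large_rings = sum(1 for size in ring_sizes if size > 6)
    return logp - sas - n_large_rings

# Example: logP 2.5, SAS 3.0, rings of size 5, 6, 7 and 8 (two rings with > 6 atoms)
print(penalized_logp(2.5, 3.0, [5, 6, 7, 8]))  # -2.5
```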
- -### TPSA -Calculate the topological polar surface area (TPSA) of a molecule as presented in [Ertl et al. (*Journal of Medicinal Chemistry*; 2000)](https://pubs.acs.org/doi/full/10.1021/jm000942e). - -### Is-Scaffold -Whether the molecule is identical to its [Murcko scaffold](https://rdkit.org/docs/source/rdkit.Chem.Scaffolds.MurckoScaffold.html). - -### Number-Of-X -Calculated with [RDKit](https://www.rdkit.org/docs/source/rdkit.Chem.rdchem.html). - -### Molecular Weight -Calculated with [RDKit](https://www.rdkit.org/docs/source/rdkit.Chem.rdchem.html). - - -### ToxSmi citation -```bib -@article{born2023chemical, - author = {Born, Jannis and Markert, Greta and Janakarajan, Nikita and Kimber, Talia B. and Volkamer, Andrea and Martínez, María Rodríguez and Manica, Matteo}, - title = {Chemical representation learning for toxicity prediction}, - journal = {Digital Discovery}, - year = {2023}, - pages = {-}, - publisher = {RSC}, - doi = {10.1039/D2DD00099G}, - url = {http://dx.doi.org/10.1039/D2DD00099G} -} -``` - - -### Unsupported properties -The following molecular properties are available via the GT4SD API but not in this UI: -- [MoleculeOne](https://tdcommons.ai/functions/oracles/#moleculeone) endpoint for retrosynthesis -- [ASKCOS](https://tdcommons.ai/functions/oracles/#askcos) endpoint for retrosynthesis -- [TDC-Docking](https://tdcommons.ai/functions/oracles/#docking-scores) endpoint for docking against a user-provided target -- [TDC-Docking](https://tdcommons.ai/functions/oracles/#docking-scores) endpoint for docking against *3pbl*. -- [Protein-ligand binding](https://tdcommons.ai/functions/oracles/#dopamine-receptor-d2-drd2) against one of the targets *drd2*, *gsk3b*, *jnk3*, *fpscores*, *cyp3a4_veith*, *drd2_current*, *gsk3b_current* or *jnk3_current*. -- [Tanimoto similarity](https://tdcommons.ai/functions/oracles/#similaritydissimilarity) to a seed molecule. 
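The Tanimoto similarity in the last bullet reduces to simple set arithmetic once a fingerprint is expressed as the set of its "on" bits — a representation we assume here purely for illustration:

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity |A intersect B| / |A union B| between two bit-set fingerprints."""
    intersection = len(fp_a & fp_b)
    union = len(fp_a) + len(fp_b) - intersection
    # Convention: two empty fingerprints are treated as identical.
    return intersection / union if union else 1.0

# Example: two shared bits out of four distinct bits overall
print(tanimoto({1, 2, 3}, {2, 3, 4}))  # 0.5
```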
- - -Moreover, GT4SD also includes properties on other entities such as [proteins](https://gt4sd.github.io/gt4sd-core/api/gt4sd.properties.proteins.html) and [crystals](https://gt4sd.github.io/gt4sd-core/api/gt4sd.properties.crystals.html). -The GT4SD web app for proteins can be found [here](https://huggingface.co/spaces/GT4SD/protein_properties) - - diff --git a/spaces/GXSA/bingo/src/lib/hooks/use-enter-submit.tsx b/spaces/GXSA/bingo/src/lib/hooks/use-enter-submit.tsx deleted file mode 100644 index d66b2d3253baff164235d4ca791aae6d84721835..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/lib/hooks/use-enter-submit.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import { useRef, type RefObject } from 'react' - -export function useEnterSubmit(): { - formRef: RefObject - onKeyDown: (event: React.KeyboardEvent) => void -} { - const formRef = useRef(null) - - const handleKeyDown = ( - event: React.KeyboardEvent - ): void => { - if ( - event.key === 'Enter' && - !event.shiftKey && - !event.nativeEvent.isComposing - ) { - formRef.current?.requestSubmit() - event.preventDefault() - } - } - - return { formRef, onKeyDown: handleKeyDown } -} diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/toolbox/__init__.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/toolbox/__init__.py deleted file mode 100644 index 531d6adef076007afd6116eb6472485f540e80de..0000000000000000000000000000000000000000 --- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/toolbox/__init__.py +++ /dev/null @@ -1,357 +0,0 @@ -from toolbox.ui import UI -from encoder import inference as encoder -from synthesizer.inference import Synthesizer -from vocoder import inference as vocoder -from pathlib import Path -from time import perf_counter as timer -from toolbox.utterance import Utterance -import numpy as np -import traceback -import sys -import torch -import librosa -from audioread.exceptions import NoBackendError - -# Use this directory structure for your datasets, or modify it to fit your needs 
-recognized_datasets = [ - "LibriSpeech/dev-clean", - "LibriSpeech/dev-other", - "LibriSpeech/test-clean", - "LibriSpeech/test-other", - "LibriSpeech/train-clean-100", - "LibriSpeech/train-clean-360", - "LibriSpeech/train-other-500", - "LibriTTS/dev-clean", - "LibriTTS/dev-other", - "LibriTTS/test-clean", - "LibriTTS/test-other", - "LibriTTS/train-clean-100", - "LibriTTS/train-clean-360", - "LibriTTS/train-other-500", - "LJSpeech-1.1", - "VoxCeleb1/wav", - "VoxCeleb1/test_wav", - "VoxCeleb2/dev/aac", - "VoxCeleb2/test/aac", - "VCTK-Corpus/wav48", -] - -#Maximum of generated wavs to keep on memory -MAX_WAVES = 15 - -class Toolbox: - def __init__(self, datasets_root, enc_models_dir, syn_models_dir, voc_models_dir, seed, no_mp3_support): - if not no_mp3_support: - try: - librosa.load("samples/6829_00000.mp3") - except NoBackendError: - print("Librosa will be unable to open mp3 files if additional software is not installed.\n" - "Please install ffmpeg or add the '--no_mp3_support' option to proceed without support for mp3 files.") - exit(-1) - self.no_mp3_support = no_mp3_support - sys.excepthook = self.excepthook - self.datasets_root = datasets_root - self.utterances = set() - self.current_generated = (None, None, None, None) # speaker_name, spec, breaks, wav - - self.synthesizer = None # type: Synthesizer - self.current_wav = None - self.waves_list = [] - self.waves_count = 0 - self.waves_namelist = [] - - # Check for webrtcvad (enables removal of silences in vocoder output) - try: - import webrtcvad - self.trim_silences = True - except: - self.trim_silences = False - - # Initialize the events and the interface - self.ui = UI() - self.reset_ui(enc_models_dir, syn_models_dir, voc_models_dir, seed) - self.setup_events() - self.ui.start() - - def excepthook(self, exc_type, exc_value, exc_tb): - traceback.print_exception(exc_type, exc_value, exc_tb) - self.ui.log("Exception: %s" % exc_value) - - def setup_events(self): - # Dataset, speaker and utterance selection - 
self.ui.browser_load_button.clicked.connect(lambda: self.load_from_browser()) - random_func = lambda level: lambda: self.ui.populate_browser(self.datasets_root, - recognized_datasets, - level) - self.ui.random_dataset_button.clicked.connect(random_func(0)) - self.ui.random_speaker_button.clicked.connect(random_func(1)) - self.ui.random_utterance_button.clicked.connect(random_func(2)) - self.ui.dataset_box.currentIndexChanged.connect(random_func(1)) - self.ui.speaker_box.currentIndexChanged.connect(random_func(2)) - - # Model selection - self.ui.encoder_box.currentIndexChanged.connect(self.init_encoder) - def func(): - self.synthesizer = None - self.ui.synthesizer_box.currentIndexChanged.connect(func) - self.ui.vocoder_box.currentIndexChanged.connect(self.init_vocoder) - - # Utterance selection - func = lambda: self.load_from_browser(self.ui.browse_file()) - self.ui.browser_browse_button.clicked.connect(func) - func = lambda: self.ui.draw_utterance(self.ui.selected_utterance, "current") - self.ui.utterance_history.currentIndexChanged.connect(func) - func = lambda: self.ui.play(self.ui.selected_utterance.wav, Synthesizer.sample_rate) - self.ui.play_button.clicked.connect(func) - self.ui.stop_button.clicked.connect(self.ui.stop) - self.ui.record_button.clicked.connect(self.record) - - #Audio - self.ui.setup_audio_devices(Synthesizer.sample_rate) - - #Wav playback & save - func = lambda: self.replay_last_wav() - self.ui.replay_wav_button.clicked.connect(func) - func = lambda: self.export_current_wave() - self.ui.export_wav_button.clicked.connect(func) - self.ui.waves_cb.currentIndexChanged.connect(self.set_current_wav) - - # Generation - func = lambda: self.synthesize() or self.vocode() - self.ui.generate_button.clicked.connect(func) - self.ui.synthesize_button.clicked.connect(self.synthesize) - self.ui.vocode_button.clicked.connect(self.vocode) - self.ui.random_seed_checkbox.clicked.connect(self.update_seed_textbox) - - # UMAP legend - 
self.ui.clear_button.clicked.connect(self.clear_utterances) - - def set_current_wav(self, index): - self.current_wav = self.waves_list[index] - - def export_current_wave(self): - self.ui.save_audio_file(self.current_wav, Synthesizer.sample_rate) - - def replay_last_wav(self): - self.ui.play(self.current_wav, Synthesizer.sample_rate) - - def reset_ui(self, encoder_models_dir, synthesizer_models_dir, vocoder_models_dir, seed): - self.ui.populate_browser(self.datasets_root, recognized_datasets, 0, True) - self.ui.populate_models(encoder_models_dir, synthesizer_models_dir, vocoder_models_dir) - self.ui.populate_gen_options(seed, self.trim_silences) - - def load_from_browser(self, fpath=None): - if fpath is None: - fpath = Path(self.datasets_root, - self.ui.current_dataset_name, - self.ui.current_speaker_name, - self.ui.current_utterance_name) - name = str(fpath.relative_to(self.datasets_root)) - speaker_name = self.ui.current_dataset_name + '_' + self.ui.current_speaker_name - - # Select the next utterance - if self.ui.auto_next_checkbox.isChecked(): - self.ui.browser_select_next() - elif fpath == "": - return - else: - name = fpath.name - speaker_name = fpath.parent.name - - if fpath.suffix.lower() == ".mp3" and self.no_mp3_support: - self.ui.log("Error: No mp3 file argument was passed but an mp3 file was used") - return - - # Get the wav from the disk. 
We take the wav with the vocoder/synthesizer format for - # playback, so as to have a fair comparison with the generated audio - wav = Synthesizer.load_preprocess_wav(fpath) - self.ui.log("Loaded %s" % name) - - self.add_real_utterance(wav, name, speaker_name) - - def record(self): - wav = self.ui.record_one(encoder.sampling_rate, 5) - if wav is None: - return - self.ui.play(wav, encoder.sampling_rate) - - speaker_name = "user01" - name = speaker_name + "_rec_%05d" % np.random.randint(100000) - self.add_real_utterance(wav, name, speaker_name) - - def add_real_utterance(self, wav, name, speaker_name): - # Compute the mel spectrogram - spec = Synthesizer.make_spectrogram(wav) - self.ui.draw_spec(spec, "current") - - # Compute the embedding - if not encoder.is_loaded(): - self.init_encoder() - encoder_wav = encoder.preprocess_wav(wav) - embed, partial_embeds, _ = encoder.embed_utterance(encoder_wav, return_partials=True) - - # Add the utterance - utterance = Utterance(name, speaker_name, wav, spec, embed, partial_embeds, False) - self.utterances.add(utterance) - self.ui.register_utterance(utterance) - - # Plot it - self.ui.draw_embed(embed, name, "current") - self.ui.draw_umap_projections(self.utterances) - - def clear_utterances(self): - self.utterances.clear() - self.ui.draw_umap_projections(self.utterances) - - def synthesize(self): - self.ui.log("Generating the mel spectrogram...") - self.ui.set_loading(1) - - # Update the synthesizer random seed - if self.ui.random_seed_checkbox.isChecked(): - seed = int(self.ui.seed_textbox.text()) - self.ui.populate_gen_options(seed, self.trim_silences) - else: - seed = None - - if seed is not None: - torch.manual_seed(seed) - - # Synthesize the spectrogram - if self.synthesizer is None or seed is not None: - self.init_synthesizer() - - texts = self.ui.text_prompt.toPlainText().split("\n") - embed = self.ui.selected_utterance.embed - embeds = [embed] * len(texts) - specs = self.synthesizer.synthesize_spectrograms(texts, embeds) 
- breaks = [spec.shape[1] for spec in specs] - spec = np.concatenate(specs, axis=1) - - self.ui.draw_spec(spec, "generated") - self.current_generated = (self.ui.selected_utterance.speaker_name, spec, breaks, None) - self.ui.set_loading(0) - - def vocode(self): - speaker_name, spec, breaks, _ = self.current_generated - assert spec is not None - - # Initialize the vocoder model and make it deterministic if the user provides a seed - if self.ui.random_seed_checkbox.isChecked(): - seed = int(self.ui.seed_textbox.text()) - self.ui.populate_gen_options(seed, self.trim_silences) - else: - seed = None - - if seed is not None: - torch.manual_seed(seed) - - # Synthesize the waveform - if not vocoder.is_loaded() or seed is not None: - self.init_vocoder() - - def vocoder_progress(i, seq_len, b_size, gen_rate): - real_time_factor = (gen_rate / Synthesizer.sample_rate) * 1000 - line = "Waveform generation: %d/%d (batch size: %d, rate: %.1fkHz - %.2fx real time)" \ - % (i * b_size, seq_len * b_size, b_size, gen_rate, real_time_factor) - self.ui.log(line, "overwrite") - self.ui.set_loading(i, seq_len) - if self.ui.current_vocoder_fpath is not None: - self.ui.log("") - wav = vocoder.infer_waveform(spec, progress_callback=vocoder_progress) - else: - self.ui.log("Waveform generation with Griffin-Lim... 
") - wav = Synthesizer.griffin_lim(spec) - self.ui.set_loading(0) - self.ui.log(" Done!", "append") - - # Add breaks - b_ends = np.cumsum(np.array(breaks) * Synthesizer.hparams.hop_size) - b_starts = np.concatenate(([0], b_ends[:-1])) - wavs = [wav[start:end] for start, end, in zip(b_starts, b_ends)] - breaks = [np.zeros(int(0.15 * Synthesizer.sample_rate))] * len(breaks) - wav = np.concatenate([i for w, b in zip(wavs, breaks) for i in (w, b)]) - - # Trim excessive silences - if self.ui.trim_silences_checkbox.isChecked(): - wav = encoder.preprocess_wav(wav) - - # Play it - wav = wav / np.abs(wav).max() * 0.97 - self.ui.play(wav, Synthesizer.sample_rate) - - # Name it (history displayed in combobox) - # TODO better naming for the combobox items? - wav_name = str(self.waves_count + 1) - - #Update waves combobox - self.waves_count += 1 - if self.waves_count > MAX_WAVES: - self.waves_list.pop() - self.waves_namelist.pop() - self.waves_list.insert(0, wav) - self.waves_namelist.insert(0, wav_name) - - self.ui.waves_cb.disconnect() - self.ui.waves_cb_model.setStringList(self.waves_namelist) - self.ui.waves_cb.setCurrentIndex(0) - self.ui.waves_cb.currentIndexChanged.connect(self.set_current_wav) - - # Update current wav - self.set_current_wav(0) - - #Enable replay and save buttons: - self.ui.replay_wav_button.setDisabled(False) - self.ui.export_wav_button.setDisabled(False) - - # Compute the embedding - # TODO: this is problematic with different sampling rates, gotta fix it - if not encoder.is_loaded(): - self.init_encoder() - encoder_wav = encoder.preprocess_wav(wav) - embed, partial_embeds, _ = encoder.embed_utterance(encoder_wav, return_partials=True) - - # Add the utterance - name = speaker_name + "_gen_%05d" % np.random.randint(100000) - utterance = Utterance(name, speaker_name, wav, spec, embed, partial_embeds, True) - self.utterances.add(utterance) - - # Plot it - self.ui.draw_embed(embed, name, "generated") - self.ui.draw_umap_projections(self.utterances) - - def 
init_encoder(self): - model_fpath = self.ui.current_encoder_fpath - - self.ui.log("Loading the encoder %s... " % model_fpath) - self.ui.set_loading(1) - start = timer() - encoder.load_model(model_fpath) - self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append") - self.ui.set_loading(0) - - def init_synthesizer(self): - model_fpath = self.ui.current_synthesizer_fpath - - self.ui.log("Loading the synthesizer %s... " % model_fpath) - self.ui.set_loading(1) - start = timer() - self.synthesizer = Synthesizer(model_fpath) - self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append") - self.ui.set_loading(0) - - def init_vocoder(self): - model_fpath = self.ui.current_vocoder_fpath - # Case of Griffin-lim - if model_fpath is None: - return - - self.ui.log("Loading the vocoder %s... " % model_fpath) - self.ui.set_loading(1) - start = timer() - vocoder.load_model(model_fpath) - self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append") - self.ui.set_loading(0) - - def update_seed_textbox(self): - self.ui.update_seed_textbox() diff --git a/spaces/Gladiator/gradient_dissent_bot/app.py b/spaces/Gladiator/gradient_dissent_bot/app.py deleted file mode 100644 index cd79ade6de65b77e937a5907d3ab227e8a395395..0000000000000000000000000000000000000000 --- a/spaces/Gladiator/gradient_dissent_bot/app.py +++ /dev/null @@ -1,148 +0,0 @@ -import os -import re -from ast import literal_eval - -import wandb -import gradio as gr -import pandas as pd -from langchain.callbacks import get_openai_callback -from langchain.chains import RetrievalQA -from langchain.chat_models import ChatOpenAI -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.prompts import PromptTemplate -from langchain.vectorstores import Chroma - -from src.config import config - -# download and read data -api = wandb.Api() -artifact_df = api.artifact(config.summarized_que_data_artifact) -artifact_df.download(config.root_data_dir) - -artifact_embeddings = 
api.artifact(config.transcript_embeddings_artifact) -chromadb_dir = artifact_embeddings.download(config.root_data_dir / "chromadb") - -df_path = config.root_data_dir / "summarized_que_podcasts.csv" -df = pd.read_csv(df_path) - - -def embed_video(title: str): - video_url = df[df["title"] == title]["url"].values[0] - match = re.search(r"v=([-\w]+)", video_url) - video_id = match.group(1) - # embed video - # video_embed = f"" - video_embed = f"" - - return video_embed - - -def get_podcast_info(title: str): - # get questions - questions = df[df["title"] == title]["questions"].values[0] - questions = literal_eval(questions) - que_str = "" - for que in questions: - que_str += f"👉 {que}\n" - - # get summary - summary = df[df["title"] == title]["summary"].values[0] - - return summary, que_str - - -def get_answer(podcast: str, question: str): - index = df[df["title"] == podcast].index[0] - db_dir = os.path.join(chromadb_dir, str(index)) - embeddings = OpenAIEmbeddings() - db = Chroma(persist_directory=db_dir, embedding_function=embeddings) - - prompt_template = """Use the following pieces of context to answer the question. - If you don't know the answer, just say that you don't know, don't try to make up an answer. - Don't add your opinions or interpretations. Ensure that you complete the answer. - If the question is not relevant to the context, just say that it is not relevant. - - CONTEXT: - {context} - - QUESTION: {question} - - ANSWER:""" - - prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"]) - - retriever = db.as_retriever() - retriever.search_kwargs["k"] = 2 - - qa = RetrievalQA.from_chain_type( - llm=ChatOpenAI(temperature=0), - chain_type="stuff", - retriever=retriever, - chain_type_kwargs={"prompt": prompt}, - return_source_documents=True, - ) - - with get_openai_callback() as cb: - result = qa({"query": question}) - print(cb) - - answer = result["result"] - return answer - - -with gr.Blocks() as demo: - gr.Markdown("

    Welcome to Gradient Dissent QA Bot 🤖

    ") - gr.Markdown( - "#### The purpose of this QA bot is to provide answers to questions related to podcast episodes from Weights & Biases' [Gradient Dissent Podcast](https://www.youtube.com/playlist?list=PLD80i8An1OEEb1jP0sjEyiLG8ULRXFob_)." - ) - gr.Markdown( - "#### First select a podcast episode and click `Get Podcast Info` to get the summary and possible questions about the podcast episode." - ) - gr.Markdown( - "#### Then ask a question about the podcast episode and click `Get Answer` to get the answer." - ) - gr.Markdown( - "#### Read the report for understanding how I built this QA bot [here](https://wandb.ai/gladiator/gradient_dissent_qabot/reports/Building-a-Q-A-Bot-for-Weights-Biases-Gradient-Dissent-Podcast--Vmlldzo0MTcyMDQz)" - ) - gr.Markdown( - "#### GitHub Repo [here](https://github.com/Gladiator07/wandb-gradient-dissent-bot/tree/main)" - ) - gr.Markdown("
    ") - - with gr.Row(): - with gr.Column(scale=0.5): - dropdown = gr.Dropdown( - df["title"].to_list(), label="Select a Podcast Episode", value=df.iloc[0]["title"] - ) - podcast_info_btn = gr.Button("Get Podcast Info") - - podcast_info_btn.click( - fn=embed_video, - inputs=dropdown, - outputs=gr.HTML(label="Podcast Video"), - ) - - question_box = gr.Textbox(label="Ask a question about the podcast episode") - with gr.Row(): - ques_clear_btn = gr.Button("Clear") - ques_btn = gr.Button("Get Answer") - - ques_btn.click( - fn=get_answer, - inputs=[dropdown, question_box], - outputs=gr.Textbox(label="Answer"), - ) - ques_clear_btn.click(lambda: None, None, question_box, queue=False) - - with gr.Column(scale=0.5): - podcast_info_btn.click( - fn=get_podcast_info, - inputs=dropdown, - outputs=[ - gr.Text(label="Summary of the podcast"), - gr.Text(label="Some of the questions you can ask"), - ], - ) - - -demo.launch() diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/rpn_r50_fpn.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/rpn_r50_fpn.py deleted file mode 100644 index 22193c1362dc70663034919a7f4397a37682dc85..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/rpn_r50_fpn.py +++ /dev/null @@ -1,59 +0,0 @@ -# model settings - -model = dict( - type='RPN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, 
.0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0))) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py deleted file mode 100644 index 482f88729ff6c08e482a5ca5c6d48b75f14f7ca8..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = './fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py' -img_norm_cfg = dict( - mean=[103.53, 116.28, 123.675], std=[57.375, 57.12, 58.395], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - 
dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/encnet/encnet_r50s-d8_512x512_80k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/encnet/encnet_r50s-d8_512x512_80k_ade20k.py deleted file mode 100644 index 600b701a7194ead496cc924bee897b6096e1c7ca..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/encnet/encnet_r50s-d8_512x512_80k_ade20k.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = [ - '../_base_/models/encnet_r50-d8.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py' -] -model = dict( - backbone=dict(stem_channels=128), - decode_head=dict(num_classes=150), - auxiliary_head=dict(num_classes=150)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_769x769_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_769x769_40k_cityscapes.py deleted file mode 100644 index 01d8f27c8cc62e681df770e111ff9f866e9d112f..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_769x769_40k_cityscapes.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', - '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - backbone=dict(dilations=(1, 1, 1, 2), strides=(1, 2, 2, 1)), - decode_head=dict(align_corners=True, dilation=6), - auxiliary_head=dict(align_corners=True, dilation=6), - 
test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/app.py b/spaces/GrandaddyShmax/AudioCraft_Plus/app.py deleted file mode 100644 index 70a3dfae4ac5c02bc1d9e78b8d5d0c2139ba503b..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/app.py +++ /dev/null @@ -1,1839 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# Updated to account for UI changes from https://github.com/rkfg/audiocraft/blob/long/app.py -# also released under the MIT license. - -import argparse -from concurrent.futures import ProcessPoolExecutor -import os -from pathlib import Path -import subprocess as sp -from tempfile import NamedTemporaryFile -import time -import warnings -import glob -import re -from PIL import Image -from pydub import AudioSegment -from datetime import datetime - -import json -import shutil -import taglib -import torch -import torchaudio -import gradio as gr -import numpy as np -import typing as tp - -from audiocraft.data.audio_utils import convert_audio -from audiocraft.data.audio import audio_write -from audiocraft.models import AudioGen, MusicGen, MultiBandDiffusion -from audiocraft.utils import ui -import random, string - -version = "2.0.0a" - -theme = gr.themes.Base( - primary_hue="lime", - secondary_hue="lime", - neutral_hue="neutral", -).set( - button_primary_background_fill_hover='*primary_500', - button_primary_background_fill_hover_dark='*primary_500', - button_secondary_background_fill_hover='*primary_500', - button_secondary_background_fill_hover_dark='*primary_500' -) - -MODEL = None # Last used model -MODELS = None -UNLOAD_MODEL = False -MOVE_TO_CPU = False -IS_BATCHED = "facebook/MusicGen" in os.environ.get('SPACE_ID', '') -print(IS_BATCHED) -MAX_BATCH_SIZE = 12 -BATCHED_DURATION = 15 -INTERRUPTING = 
False -MBD = None -# We have to wrap subprocess call to clean a bit the log when using gr.make_waveform -_old_call = sp.call - - -def generate_random_string(length): - characters = string.ascii_letters + string.digits - return ''.join(random.choice(characters) for _ in range(length)) - - -def resize_video(input_path, output_path, target_width, target_height): - ffmpeg_cmd = [ - 'ffmpeg', - '-y', - '-i', input_path, - '-vf', f'scale={target_width}:{target_height}', - '-c:a', 'copy', - output_path - ] - sp.run(ffmpeg_cmd) - - -def _call_nostderr(*args, **kwargs): - # Avoid ffmpeg vomiting on the logs. - kwargs['stderr'] = sp.DEVNULL - kwargs['stdout'] = sp.DEVNULL - _old_call(*args, **kwargs) - - -sp.call = _call_nostderr -# Preallocating the pool of processes. -pool = ProcessPoolExecutor(4) -pool.__enter__() - - -def interrupt(): - global INTERRUPTING - INTERRUPTING = True - - -class FileCleaner: - def __init__(self, file_lifetime: float = 3600): - self.file_lifetime = file_lifetime - self.files = [] - - def add(self, path: tp.Union[str, Path]): - self._cleanup() - self.files.append((time.time(), Path(path))) - - def _cleanup(self): - now = time.time() - for time_added, path in list(self.files): - if now - time_added > self.file_lifetime: - if path.exists(): - path.unlink() - self.files.pop(0) - else: - break - - -file_cleaner = FileCleaner() - - -def make_waveform(*args, **kwargs): - # Further remove some warnings. 
- be = time.time() - with warnings.catch_warnings(): - warnings.simplefilter('ignore') - height = kwargs.pop('height') - width = kwargs.pop('width') - if height < 256: - height = 256 - if width < 256: - width = 256 - waveform_video = gr.make_waveform(*args, **kwargs) - out = f"{generate_random_string(12)}.mp4" - image = kwargs.get('bg_image', None) - if image is None: - resize_video(waveform_video, out, 900, 300) - else: - resize_video(waveform_video, out, width, height) - print("Make a video took", time.time() - be) - return out - - -def load_model(version='GrandaddyShmax/musicgen-melody', custom_model=None, base_model='GrandaddyShmax/musicgen-medium', gen_type="music"): - global MODEL, MODELS - print("Loading model", version) - if MODELS is None: - if version == 'GrandaddyShmax/musicgen-custom': - MODEL = MusicGen.get_pretrained(base_model) - file_path = os.path.abspath("models/" + str(custom_model) + ".pt") - MODEL.lm.load_state_dict(torch.load(file_path)) - else: - if gen_type == "music": - MODEL = MusicGen.get_pretrained(version) - elif gen_type == "audio": - MODEL = AudioGen.get_pretrained(version) - - return - - else: - t1 = time.monotonic() - if MODEL is not None: - MODEL.to('cpu') # move to cache - print("Previous model moved to CPU in %.2fs" % (time.monotonic() - t1)) - t1 = time.monotonic() - if version != 'GrandaddyShmax/musicgen-custom' and MODELS.get(version) is None: - print("Loading model %s from disk" % version) - if gen_type == "music": - result = MusicGen.get_pretrained(version) - elif gen_type == "audio": - result = AudioGen.get_pretrained(version) - MODELS[version] = result - print("Model loaded in %.2fs" % (time.monotonic() - t1)) - MODEL = result - return - result = MODELS[version].to('cuda') - print("Cached model loaded in %.2fs" % (time.monotonic() - t1)) - MODEL = result - -def get_audio_info(audio_path): - if audio_path is not None: - if audio_path.name.endswith(".wav") or audio_path.name.endswith(".mp4") or 
audio_path.name.endswith(".json"): - if not audio_path.name.endswith(".json"): - with taglib.File(audio_path.name, save_on_exit=False) as song: - if 'COMMENT' not in song.tags: - return "No tags found. Either the file is not generated by MusicGen+ V1.2.7 and higher or the tags are corrupted. (Discord removes metadata from mp4 and wav files, so you can't use them)" - json_string = song.tags['COMMENT'][0] - data = json.loads(json_string) - global_prompt = str("\nGlobal Prompt: " + (data['global_prompt'] if data['global_prompt'] != "" else "none")) if 'global_prompt' in data else "" - bpm = str("\nBPM: " + data['bpm']) if 'bpm' in data else "" - key = str("\nKey: " + data['key']) if 'key' in data else "" - scale = str("\nScale: " + data['scale']) if 'scale' in data else "" - prompts = str("\nPrompts: " + (data['texts'] if data['texts'] != "['']" else "none")) if 'texts' in data else "" - duration = str("\nDuration: " + data['duration']) if 'duration' in data else "" - overlap = str("\nOverlap: " + data['overlap']) if 'overlap' in data else "" - seed = str("\nSeed: " + data['seed']) if 'seed' in data else "" - audio_mode = str("\nAudio Mode: " + data['audio_mode']) if 'audio_mode' in data else "" - input_length = str("\nInput Length: " + data['input_length']) if 'input_length' in data else "" - channel = str("\nChannel: " + data['channel']) if 'channel' in data else "" - sr_select = str("\nSample Rate: " + data['sr_select']) if 'sr_select' in data else "" - gen_type = str(data['generator'] + "gen-") if 'generator' in data else "" - model = str("\nModel: " + gen_type + data['model']) if 'model' in data else "" - custom_model = str("\nCustom Model: " + data['custom_model']) if 'custom_model' in data else "" - base_model = str("\nBase Model: " + data['base_model']) if 'base_model' in data else "" - decoder = str("\nDecoder: " + data['decoder']) if 'decoder' in data else "" - topk = str("\nTopk: " + data['topk']) if 'topk' in data else "" - topp = str("\nTopp: " + 
data['topp']) if 'topp' in data else "" - temperature = str("\nTemperature: " + data['temperature']) if 'temperature' in data else "" - cfg_coef = str("\nClassifier Free Guidance: " + data['cfg_coef']) if 'cfg_coef' in data else "" - version = str("Version: " + data['version']) if 'version' in data else "Version: Unknown" - info = str(version + global_prompt + bpm + key + scale + prompts + duration + overlap + seed + audio_mode + input_length + channel + sr_select + model + custom_model + base_model + decoder + topk + topp + temperature + cfg_coef) - if info == "": - return "No tags found. Either the file is not generated by MusicGen+ V1.2.7 and higher or the tags are corrupted. (Discord removes metadata from mp4 and wav files, so you can't use them)" - return info - else: - with open(audio_path.name) as json_file: - data = json.load(json_file) - #if 'global_prompt' not in data: - #return "No tags found. Either the file is not generated by MusicGen+ V1.2.8a and higher or the tags are corrupted." 
- global_prompt = str("\nGlobal Prompt: " + (data['global_prompt'] if data['global_prompt'] != "" else "none")) if 'global_prompt' in data else "" - bpm = str("\nBPM: " + data['bpm']) if 'bpm' in data else "" - key = str("\nKey: " + data['key']) if 'key' in data else "" - scale = str("\nScale: " + data['scale']) if 'scale' in data else "" - prompts = str("\nPrompts: " + (data['texts'] if data['texts'] != "['']" else "none")) if 'texts' in data else "" - duration = str("\nDuration: " + data['duration']) if 'duration' in data else "" - overlap = str("\nOverlap: " + data['overlap']) if 'overlap' in data else "" - seed = str("\nSeed: " + data['seed']) if 'seed' in data else "" - audio_mode = str("\nAudio Mode: " + data['audio_mode']) if 'audio_mode' in data else "" - input_length = str("\nInput Length: " + data['input_length']) if 'input_length' in data else "" - channel = str("\nChannel: " + data['channel']) if 'channel' in data else "" - sr_select = str("\nSample Rate: " + data['sr_select']) if 'sr_select' in data else "" - gen_type = str(data['generator'] + "gen-") if 'generator' in data else "" - model = str("\nModel: " + gen_type + data['model']) if 'model' in data else "" - custom_model = str("\nCustom Model: " + data['custom_model']) if 'custom_model' in data else "" - base_model = str("\nBase Model: " + data['base_model']) if 'base_model' in data else "" - decoder = str("\nDecoder: " + data['decoder']) if 'decoder' in data else "" - topk = str("\nTopk: " + data['topk']) if 'topk' in data else "" - topp = str("\nTopp: " + data['topp']) if 'topp' in data else "" - temperature = str("\nTemperature: " + data['temperature']) if 'temperature' in data else "" - cfg_coef = str("\nClassifier Free Guidance: " + data['cfg_coef']) if 'cfg_coef' in data else "" - version = str("Version: " + data['version']) if 'version' in data else "Version: Unknown" - info = str(version + global_prompt + bpm + key + scale + prompts + duration + overlap + seed + audio_mode + input_length + 
channel + sr_select + model + custom_model + base_model + decoder + topk + topp + temperature + cfg_coef) - if info == "": - return "No tags found. Either the file is not generated by MusicGen+ V1.2.7 and higher or the tags are corrupted." - return info - else: - return "Only .wav ,.mp4 and .json files are supported" - else: - return None - - -def info_to_params(audio_path): - if audio_path is not None: - if audio_path.name.endswith(".wav") or audio_path.name.endswith(".mp4") or audio_path.name.endswith(".json"): - if not audio_path.name.endswith(".json"): - with taglib.File(audio_path.name, save_on_exit=False) as song: - if 'COMMENT' not in song.tags: - return "Default", False, "", 120, "C", "Major", "large", None, "medium", 1, "", "", "", "", "", "", "", "", "", "", 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, "sample", 10, 250, 0, 1.0, 5.0, -1, 12, "stereo", "48000" - json_string = song.tags['COMMENT'][0] - data = json.loads(json_string) - struc_prompt = (False if data['bpm'] == "none" else True) if 'bpm' in data else False - global_prompt = data['global_prompt'] if 'global_prompt' in data else "" - bpm = (120 if data['bpm'] == "none" else int(data['bpm'])) if 'bpm' in data else 120 - key = ("C" if data['key'] == "none" else data['key']) if 'key' in data else "C" - scale = ("Major" if data['scale'] == "none" else data['scale']) if 'scale' in data else "Major" - model = data['model'] if 'model' in data else "large" - custom_model = (data['custom_model'] if data['custom_model'] in get_available_models() else None) if 'custom_model' in data else None - base_model = data['base_model'] if 'base_model' in data else "medium" - decoder = data['decoder'] if 'decoder' in data else "Default" - if 'texts' not in data: - unique_prompts = 1 - text = ["", "", "", "", "", "", "", "", "", ""] - repeat = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] - else: - s = data['texts'] - s = re.findall(r"'(.*?)'", s) - text = [] - repeat = [] - i = 0 - for elem in s: - if elem.strip(): - if i == 0 or elem != s[i-1]: - 
text.append(elem) - repeat.append(1) - else: - repeat[-1] += 1 - i += 1 - text.extend([""] * (10 - len(text))) - repeat.extend([1] * (10 - len(repeat))) - unique_prompts = len([t for t in text if t]) - audio_mode = ("sample" if data['audio_mode'] == "none" else data['audio_mode']) if 'audio_mode' in data else "sample" - duration = int(data['duration']) if 'duration' in data else 10 - topk = float(data['topk']) if 'topk' in data else 250 - topp = float(data['topp']) if 'topp' in data else 0 - temperature = float(data['temperature']) if 'temperature' in data else 1.0 - cfg_coef = float(data['cfg_coef']) if 'cfg_coef' in data else 5.0 - seed = int(data['seed']) if 'seed' in data else -1 - overlap = int(data['overlap']) if 'overlap' in data else 12 - channel = data['channel'] if 'channel' in data else "stereo" - sr_select = data['sr_select'] if 'sr_select' in data else "48000" - return decoder, struc_prompt, global_prompt, bpm, key, scale, model, custom_model, base_model, unique_prompts, text[0], text[1], text[2], text[3], text[4], text[5], text[6], text[7], text[8], text[9], repeat[0], repeat[1], repeat[2], repeat[3], repeat[4], repeat[5], repeat[6], repeat[7], repeat[8], repeat[9], audio_mode, duration, topk, topp, temperature, cfg_coef, seed, overlap, channel, sr_select - else: - with open(audio_path.name) as json_file: - data = json.load(json_file) - struc_prompt = (False if data['bpm'] == "none" else True) if 'bpm' in data else False - global_prompt = data['global_prompt'] if 'global_prompt' in data else "" - bpm = (120 if data['bpm'] == "none" else int(data['bpm'])) if 'bpm' in data else 120 - key = ("C" if data['key'] == "none" else data['key']) if 'key' in data else "C" - scale = ("Major" if data['scale'] == "none" else data['scale']) if 'scale' in data else "Major" - model = data['model'] if 'model' in data else "large" - custom_model = (data['custom_model'] if data['custom_model'] in get_available_models() else None) if 'custom_model' in data else None - 
base_model = data['base_model'] if 'base_model' in data else "medium" - decoder = data['decoder'] if 'decoder' in data else "Default" - if 'texts' not in data: - unique_prompts = 1 - text = ["", "", "", "", "", "", "", "", "", ""] - repeat = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] - else: - s = data['texts'] - s = re.findall(r"'(.*?)'", s) - text = [] - repeat = [] - i = 0 - for elem in s: - if elem.strip(): - if i == 0 or elem != s[i-1]: - text.append(elem) - repeat.append(1) - else: - repeat[-1] += 1 - i += 1 - text.extend([""] * (10 - len(text))) - repeat.extend([1] * (10 - len(repeat))) - unique_prompts = len([t for t in text if t]) - audio_mode = ("sample" if data['audio_mode'] == "none" else data['audio_mode']) if 'audio_mode' in data else "sample" - duration = int(data['duration']) if 'duration' in data else 10 - topk = float(data['topk']) if 'topk' in data else 250 - topp = float(data['topp']) if 'topp' in data else 0 - temperature = float(data['temperature']) if 'temperature' in data else 1.0 - cfg_coef = float(data['cfg_coef']) if 'cfg_coef' in data else 5.0 - seed = int(data['seed']) if 'seed' in data else -1 - overlap = int(data['overlap']) if 'overlap' in data else 12 - channel = data['channel'] if 'channel' in data else "stereo" - sr_select = data['sr_select'] if 'sr_select' in data else "48000" - return decoder, struc_prompt, global_prompt, bpm, key, scale, model, custom_model, base_model, unique_prompts, text[0], text[1], text[2], text[3], text[4], text[5], text[6], text[7], text[8], text[9], repeat[0], repeat[1], repeat[2], repeat[3], repeat[4], repeat[5], repeat[6], repeat[7], repeat[8], repeat[9], audio_mode, duration, topk, topp, temperature, cfg_coef, seed, overlap, channel, sr_select - else: - return "Default", False, "", 120, "C", "Major", "large", None, "medium", 1, "", "", "", "", "", "", "", "", "", "", 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, "sample", 10, 250, 0, 1.0, 5.0, -1, 12, "stereo", "48000" - else: - return "Default", False, "", 120, "C", "Major", 
"large", None, "medium", 1, "", "", "", "", "", "", "", "", "", "", 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, "sample", 10, 250, 0, 1.0, 5.0, -1, 12, "stereo", "48000" - - -def info_to_params_a(audio_path): - if audio_path is not None: - if audio_path.name.endswith(".wav") or audio_path.name.endswith(".mp4") or audio_path.name.endswith(".json"): - if not audio_path.name.endswith(".json"): - with taglib.File(audio_path.name, save_on_exit=False) as song: - if 'COMMENT' not in song.tags: - return "Default", False, "", 1, "", "", "", "", "", "", "", "", "", "", 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 10, 250, 0, 1.0, 5.0, -1, 12, "stereo", "48000" - json_string = song.tags['COMMENT'][0] - data = json.loads(json_string) - struc_prompt = (False if data['global_prompt'] == "" else True) if 'global_prompt' in data else False - global_prompt = data['global_prompt'] if 'global_prompt' in data else "" - decoder = data['decoder'] if 'decoder' in data else "Default" - if 'texts' not in data: - unique_prompts = 1 - text = ["", "", "", "", "", "", "", "", "", ""] - repeat = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] - else: - s = data['texts'] - s = re.findall(r"'(.*?)'", s) - text = [] - repeat = [] - i = 0 - for elem in s: - if elem.strip(): - if i == 0 or elem != s[i-1]: - text.append(elem) - repeat.append(1) - else: - repeat[-1] += 1 - i += 1 - text.extend([""] * (10 - len(text))) - repeat.extend([1] * (10 - len(repeat))) - unique_prompts = len([t for t in text if t]) - duration = int(data['duration']) if 'duration' in data else 10 - topk = float(data['topk']) if 'topk' in data else 250 - topp = float(data['topp']) if 'topp' in data else 0 - temperature = float(data['temperature']) if 'temperature' in data else 1.0 - cfg_coef = float(data['cfg_coef']) if 'cfg_coef' in data else 5.0 - seed = int(data['seed']) if 'seed' in data else -1 - overlap = int(data['overlap']) if 'overlap' in data else 12 - channel = data['channel'] if 'channel' in data else "stereo" - sr_select = data['sr_select'] if 'sr_select' in 
data else "48000" - return decoder, struc_prompt, global_prompt, unique_prompts, text[0], text[1], text[2], text[3], text[4], text[5], text[6], text[7], text[8], text[9], repeat[0], repeat[1], repeat[2], repeat[3], repeat[4], repeat[5], repeat[6], repeat[7], repeat[8], repeat[9], duration, topk, topp, temperature, cfg_coef, seed, overlap, channel, sr_select - else: - with open(audio_path.name) as json_file: - data = json.load(json_file) - struc_prompt = (False if data['global_prompt'] == "" else True) if 'global_prompt' in data else False - global_prompt = data['global_prompt'] if 'global_prompt' in data else "" - decoder = data['decoder'] if 'decoder' in data else "Default" - if 'texts' not in data: - unique_prompts = 1 - text = ["", "", "", "", "", "", "", "", "", ""] - repeat = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] - else: - s = data['texts'] - s = re.findall(r"'(.*?)'", s) - text = [] - repeat = [] - i = 0 - for elem in s: - if elem.strip(): - if i == 0 or elem != s[i-1]: - text.append(elem) - repeat.append(1) - else: - repeat[-1] += 1 - i += 1 - text.extend([""] * (10 - len(text))) - repeat.extend([1] * (10 - len(repeat))) - unique_prompts = len([t for t in text if t]) - duration = int(data['duration']) if 'duration' in data else 10 - topk = float(data['topk']) if 'topk' in data else 250 - topp = float(data['topp']) if 'topp' in data else 0 - temperature = float(data['temperature']) if 'temperature' in data else 1.0 - cfg_coef = float(data['cfg_coef']) if 'cfg_coef' in data else 5.0 - seed = int(data['seed']) if 'seed' in data else -1 - overlap = int(data['overlap']) if 'overlap' in data else 12 - channel = data['channel'] if 'channel' in data else "stereo" - sr_select = data['sr_select'] if 'sr_select' in data else "48000" - return decoder, struc_prompt, global_prompt, unique_prompts, text[0], text[1], text[2], text[3], text[4], text[5], text[6], text[7], text[8], text[9], repeat[0], repeat[1], repeat[2], repeat[3], repeat[4], repeat[5], repeat[6], repeat[7], 
repeat[8], repeat[9], duration, topk, topp, temperature, cfg_coef, seed, overlap, channel, sr_select - - else: - return "Default", False, "", 1, "", "", "", "", "", "", "", "", "", "", 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 10, 250, 0, 1.0, 5.0, -1, 12, "stereo", "48000" - else: - return "Default", False, "", 1, "", "", "", "", "", "", "", "", "", "", 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 10, 250, 0, 1.0, 5.0, -1, 12, "stereo", "48000" - - -def make_pseudo_stereo (filename, sr_select, pan, delay): - if pan: - temp = AudioSegment.from_wav(filename) - if sr_select != "32000": - temp = temp.set_frame_rate(int(sr_select)) - left = temp.pan(-0.5) - 5 - right = temp.pan(0.6) - 5 - temp = left.overlay(right, position=5) - temp.export(filename, format="wav") - if delay: - waveform, sample_rate = torchaudio.load(filename) # load mono WAV file - delay_seconds = 0.01 # set delay 10ms - delay_samples = int(delay_seconds * sample_rate) # Calculating delay value in number of samples - stereo_waveform = torch.stack([waveform[0], torch.cat((torch.zeros(delay_samples), waveform[0][:-delay_samples]))]) # Generate a stereo file with original mono audio and delayed version - torchaudio.save(filename, stereo_waveform, sample_rate) - return - - -def normalize_audio(audio_data): - audio_data = audio_data.astype(np.float32) - max_value = np.max(np.abs(audio_data)) - audio_data /= max_value - return audio_data - - -def load_diffusion(): - global MBD - if MBD is None: - print("loading MBD") - MBD = MultiBandDiffusion.get_mbd_musicgen() - - -def unload_diffusion(): - global MBD - if MBD is not None: - print("unloading MBD") - MBD = None - - -def _do_predictions(gen_type, texts, melodies, sample, trim_start, trim_end, duration, image, height, width, background, bar1, bar2, channel, sr_select, progress=False, **gen_kwargs): - if gen_type == "music": - maximum_size = 29.5 - elif gen_type == "audio": - maximum_size = 9.5 - cut_size = 0 - input_length = 0 - sampleP = None - if sample is not None: - globalSR, 
sampleM = sample[0], sample[1] - sampleM = normalize_audio(sampleM) - sampleM = torch.from_numpy(sampleM).t() - if sampleM.dim() == 1: - sampleM = sampleM.unsqueeze(0) - sample_length = sampleM.shape[sampleM.dim() - 1] / globalSR - if trim_start >= sample_length: - trim_start = sample_length - 0.5 - if trim_end >= sample_length: - trim_end = sample_length - 0.5 - if trim_start + trim_end >= sample_length: - tmp = sample_length - 0.5 - trim_start = tmp / 2 - trim_end = tmp / 2 - sampleM = sampleM[..., int(globalSR * trim_start):int(globalSR * (sample_length - trim_end))] - sample_length = sample_length - (trim_start + trim_end) - if sample_length > maximum_size: - cut_size = sample_length - maximum_size - sampleP = sampleM[..., :int(globalSR * cut_size)] - sampleM = sampleM[..., int(globalSR * cut_size):] - if sample_length >= duration: - duration = sample_length + 0.5 - input_length = sample_length - global MODEL - MODEL.set_generation_params(duration=(duration - cut_size), **gen_kwargs) - print("new batch", len(texts), texts, [None if m is None else (m[0], m[1].shape) for m in melodies], [None if sample is None else (sample[0], sample[1].shape)]) - be = time.time() - processed_melodies = [] - if gen_type == "music": - target_sr = 32000 - elif gen_type == "audio": - target_sr = 16000 - target_ac = 1 - - for melody in melodies: - if melody is None: - processed_melodies.append(None) - else: - sr, melody = melody[0], torch.from_numpy(melody[1]).to(MODEL.device).float().t() - if melody.dim() == 1: - melody = melody[None] - melody = melody[..., :int(sr * duration)] - melody = convert_audio(melody, sr, target_sr, target_ac) - processed_melodies.append(melody) - - if sample is not None: - if sampleP is None: - if gen_type == "music": - outputs = MODEL.generate_continuation( - prompt=sampleM, - prompt_sample_rate=globalSR, - descriptions=texts, - progress=progress, - return_tokens=USE_DIFFUSION - ) - elif gen_type == "audio": - outputs = MODEL.generate_continuation( - 
prompt=sampleM, - prompt_sample_rate=globalSR, - descriptions=texts, - progress=progress - ) - else: - if sampleP.dim() > 1: - sampleP = convert_audio(sampleP, globalSR, target_sr, target_ac) - sampleP = sampleP.to(MODEL.device).float().unsqueeze(0) - if gen_type == "music": - outputs = MODEL.generate_continuation( - prompt=sampleM, - prompt_sample_rate=globalSR, - descriptions=texts, - progress=progress, - return_tokens=USE_DIFFUSION - ) - elif gen_type == "audio": - outputs = MODEL.generate_continuation( - prompt=sampleM, - prompt_sample_rate=globalSR, - descriptions=texts, - progress=progress - ) - outputs = torch.cat([sampleP, outputs], 2) - - elif any(m is not None for m in processed_melodies): - if gen_type == "music": - outputs = MODEL.generate_with_chroma( - descriptions=texts, - melody_wavs=processed_melodies, - melody_sample_rate=target_sr, - progress=progress, - return_tokens=USE_DIFFUSION - ) - elif gen_type == "audio": - outputs = MODEL.generate_with_chroma( - descriptions=texts, - melody_wavs=processed_melodies, - melody_sample_rate=target_sr, - progress=progress - ) - else: - if gen_type == "music": - outputs = MODEL.generate(texts, progress=progress, return_tokens=USE_DIFFUSION) - elif gen_type == "audio": - outputs = MODEL.generate(texts, progress=progress) - - if USE_DIFFUSION: - print("outputs: " + str(outputs)) - outputs_diffusion = MBD.tokens_to_wav(outputs[1]) - outputs = torch.cat([outputs[0], outputs_diffusion], dim=0) - outputs = outputs.detach().cpu().float() - backups = outputs - if channel == "stereo": - outputs = convert_audio(outputs, target_sr, int(sr_select), 2) - elif channel == "mono" and sr_select != "32000": - outputs = convert_audio(outputs, target_sr, int(sr_select), 1) - out_files = [] - out_audios = [] - out_backup = [] - for output in outputs: - with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file: - audio_write( - file.name, output, (MODEL.sample_rate if channel == "stereo effect" else int(sr_select)), 
strategy="loudness", - loudness_headroom_db=16, loudness_compressor=True, add_suffix=False) - - if channel == "stereo effect": - make_pseudo_stereo(file.name, sr_select, pan=True, delay=True); - - out_files.append(pool.submit(make_waveform, file.name, bg_image=image, bg_color=background, bars_color=(bar1, bar2), fg_alpha=1.0, bar_count=75, height=height, width=width)) - out_audios.append(file.name) - file_cleaner.add(file.name) - print(f'wav: {file.name}') - for backup in backups: - with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file: - audio_write( - file.name, backup, MODEL.sample_rate, strategy="loudness", - loudness_headroom_db=16, loudness_compressor=True, add_suffix=False) - out_backup.append(file.name) - file_cleaner.add(file.name) - res = [out_file.result() for out_file in out_files] - res_audio = out_audios - res_backup = out_backup - for file in res: - file_cleaner.add(file) - print(f'video: {file}') - print("batch finished", len(texts), time.time() - be) - print("Tempfiles currently stored: ", len(file_cleaner.files)) - if MOVE_TO_CPU: - MODEL.to('cpu') - if UNLOAD_MODEL: - MODEL = None - torch.cuda.empty_cache() - torch.cuda.ipc_collect() - return res, res_audio, res_backup, input_length - - -def predict_batched(texts, melodies): - max_text_length = 512 - texts = [text[:max_text_length] for text in texts] - load_model('melody') - res = _do_predictions(texts, melodies, BATCHED_DURATION) - return res - - -def add_tags(filename, tags): - json_string = None - - data = { - "global_prompt": tags[0], - "bpm": tags[1], - "key": tags[2], - "scale": tags[3], - "texts": tags[4], - "duration": tags[5], - "overlap": tags[6], - "seed": tags[7], - "audio_mode": tags[8], - "input_length": tags[9], - "channel": tags[10], - "sr_select": tags[11], - "model": tags[12], - "custom_model": tags[13], - "base_model": tags[14], - "decoder": tags[15], - "topk": tags[16], - "topp": tags[17], - "temperature": tags[18], - "cfg_coef": tags[19], - "generator": tags[20], 
- "version": version - } - - json_string = json.dumps(data) - - if os.path.exists(filename): - with taglib.File(filename, save_on_exit=True) as song: - song.tags = {'COMMENT': json_string } - - json_file = open(tags[7] + '.json', 'w') - json_file.write(json_string) - json_file.close() - - return json_file.name; - - -def save_outputs(mp4, wav_tmp, tags, gen_type): - # mp4: .mp4 file name in the root running folder of app.py - # wav_tmp: temporary wav file located in the %TEMP% folder - # seed - the seed that was used - # example: BgnJtr4Pn1AJ.mp4, C:\Users\Alex\AppData\Local\Temp\tmp4ermrebs.wav, 195123182343465 - # procedure: read the generated .mp4 and wav files, rename them using the seed as the name, - # and store them in the ./output/today_date/wav and ./output/today_date/mp4 folders. - # if a file with the same seed number already exists, a postfix like seed(n) is appended to the name, - # where n is a sequence number (1, 2, 3, 4 and so on), - # then the generated mp4 and wav are stored in the destination folders. - - current_date = datetime.now().strftime("%Y%m%d") - wav_directory = os.path.join(os.getcwd(), 'output', current_date, gen_type,'wav') - mp4_directory = os.path.join(os.getcwd(), 'output', current_date, gen_type,'mp4') - json_directory = os.path.join(os.getcwd(), 'output', current_date, gen_type,'json') - os.makedirs(wav_directory, exist_ok=True) - os.makedirs(mp4_directory, exist_ok=True) - os.makedirs(json_directory, exist_ok=True) - - filename = str(tags[7]) + '.wav' - target = os.path.join(wav_directory, filename) - counter = 1 - while os.path.exists(target): - filename = str(tags[7]) + f'({counter})' + '.wav' - target = os.path.join(wav_directory, filename) - counter += 1 - - shutil.copyfile(wav_tmp, target); # make a copy of the original file - json_file = add_tags(target, tags); - - wav_target=target; - target=target.replace('wav', 'mp4'); - mp4_target=target; - - mp4=r'./' +mp4; - shutil.copyfile(mp4, target); # make a copy of the original file - _ = add_tags(target, tags); - - target=target.replace('mp4', 'json');
# change the extension to json - json_target=target; # store the json target - - shutil.copyfile(json_file, target); # copy the json file to the destination (copyfile creates the target itself, so no separate open() is needed) - - os.remove(json_file) - - return wav_target, mp4_target, json_target; - - -def clear_cash(): - # delete all temporary files generated by the system - current_date = datetime.now().date() - current_directory = os.getcwd() - files = glob.glob(os.path.join(current_directory, '*.mp4')) - for file in files: - creation_date = datetime.fromtimestamp(os.path.getctime(file)).date() - if creation_date == current_date: - os.remove(file) - - temp_directory = os.environ.get('TEMP') - files = glob.glob(os.path.join(temp_directory, 'tmp*.mp4')) - for file in files: - creation_date = datetime.fromtimestamp(os.path.getctime(file)).date() - if creation_date == current_date: - os.remove(file) - - files = glob.glob(os.path.join(temp_directory, 'tmp*.wav')) - for file in files: - creation_date = datetime.fromtimestamp(os.path.getctime(file)).date() - if creation_date == current_date: - os.remove(file) - - files = glob.glob(os.path.join(temp_directory, 'tmp*.png')) - for file in files: - creation_date = datetime.fromtimestamp(os.path.getctime(file)).date() - if creation_date == current_date: - os.remove(file) - return - - -def s2t(seconds, seconds2): - # convert seconds to a time range string - # seconds - start time in seconds; seconds2 - end time in seconds - # return a string in the format 00:00 - 00:00 - m, s = divmod(seconds, 60) - m2, s2 = divmod(seconds2, 60) - if seconds != 0 and seconds < seconds2: - s = s + 1 - return ("%02d:%02d - %02d:%02d" % (m, s, m2, s2)) - - -def calc_time(gen_type, s, duration, overlap, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9): - # calculate the time range covered by each generated segment - # overlap - overlap in seconds - # d0-d9 - repeat counts (drag values) for each prompt - # return a time range string for each of the ten prompts - d_amount = [int(d0), int(d1), int(d2), int(d3), int(d4), int(d5), int(d6), int(d7), int(d8), int(d9)] - calc = [] - tracks = [] - time = 0 - s = s - 1 - max_time = duration -
max_limit = 0 - if gen_type == "music": - max_limit = 30 - elif gen_type == "audio": - max_limit = 10 - track_add = max_limit - overlap - tracks.append(max_limit + ((d_amount[0] - 1) * track_add)) - for i in range(1, 10): - tracks.append(d_amount[i] * track_add) - - if tracks[0] >= max_time or s == 0: - calc.append(s2t(time, max_time)) - time = max_time - else: - calc.append(s2t(time, tracks[0])) - time = tracks[0] - - for i in range(1, 10): - if time + tracks[i] >= max_time or i == s: - calc.append(s2t(time, max_time)) - time = max_time - else: - calc.append(s2t(time, time + tracks[i])) - time = time + tracks[i] - - return calc[0], calc[1], calc[2], calc[3], calc[4], calc[5], calc[6], calc[7], calc[8], calc[9] - - -def predict_full(gen_type, model, decoder, custom_model, base_model, prompt_amount, struc_prompt, bpm, key, scale, global_prompt, p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, audio, mode, trim_start, trim_end, duration, topk, topp, temperature, cfg_coef, seed, overlap, image, height, width, background, bar1, bar2, channel, sr_select, progress=gr.Progress()): - global INTERRUPTING - global USE_DIFFUSION - INTERRUPTING = False - - if gen_type == "audio": - custom_model = None - base_model = "medium" - - if temperature < 0: - raise gr.Error("Temperature must be >= 0.") - if topk < 0: - raise gr.Error("Topk must be non-negative.") - if topp < 0: - raise gr.Error("Topp must be non-negative.") - - if trim_start < 0: - trim_start = 0 - if trim_end < 0: - trim_end = 0 - - topk = int(topk) - - if decoder == "MultiBand_Diffusion": - USE_DIFFUSION = True - load_diffusion() - else: - USE_DIFFUSION = False - unload_diffusion() - - if gen_type == "music": - model_shrt = model - model = "GrandaddyShmax/musicgen-" + model - elif gen_type == "audio": - model_shrt = model - model = "GrandaddyShmax/audiogen-" + model - base_model_shrt = base_model - base_model = "GrandaddyShmax/musicgen-" + base_model - - if MODEL is None or MODEL.name 
!= (model): - load_model(model, custom_model, base_model, gen_type) - else: - if MOVE_TO_CPU: - MODEL.to('cuda') - - if seed < 0: - seed = random.randint(0, 0xffff_ffff_ffff) - torch.manual_seed(seed) - - def _progress(generated, to_generate): - progress((min(generated, to_generate), to_generate)) - if INTERRUPTING: - raise gr.Error("Interrupted.") - MODEL.set_custom_progress_callback(_progress) - - audio_mode = "none" - melody = None - sample = None - if audio: - audio_mode = mode - if mode == "sample": - sample = audio - elif mode == "melody": - melody = audio - - base_model = "none" if model != "custom" else base_model - custom_model = "none" if model != "custom" else custom_model - - text_cat = [p0, p1, p2, p3, p4, p5, p6, p7, p8, p9] - drag_cat = [d0, d1, d2, d3, d4, d5, d6, d7, d8, d9] - texts = [] - raw_texts = [] - ind = 0 - ind2 = 0 - while ind < prompt_amount: - for ind2 in range(int(drag_cat[ind])): - if not struc_prompt: - texts.append(text_cat[ind]) - global_prompt = "none" - bpm = "none" - key = "none" - scale = "none" - raw_texts.append(text_cat[ind]) - else: - if gen_type == "music": - bpm_str = str(bpm) + " bpm" - key_str = ", " + str(key) + " " + str(scale) - global_str = (", " + str(global_prompt)) if str(global_prompt) != "" else "" - elif gen_type == "audio": - bpm_str = "" - key_str = "" - global_str = (str(global_prompt)) if str(global_prompt) != "" else "" - texts_str = (", " + str(text_cat[ind])) if str(text_cat[ind]) != "" else "" - texts.append(bpm_str + key_str + global_str + texts_str) - raw_texts.append(text_cat[ind]) - ind2 = 0 - ind = ind + 1 - - outs, outs_audio, outs_backup, input_length = _do_predictions( - gen_type, [texts], [melody], sample, trim_start, trim_end, duration, image, height, width, background, bar1, bar2, channel, sr_select, progress=True, - top_k=topk, top_p=topp, temperature=temperature, cfg_coef=cfg_coef, extend_stride=MODEL.max_duration-overlap) - tags = [str(global_prompt), str(bpm), str(key), str(scale), 
str(raw_texts), str(duration), str(overlap), str(seed), str(audio_mode), str(input_length), str(channel), str(sr_select), str(model_shrt), str(custom_model), str(base_model_shrt), str(decoder), str(topk), str(topp), str(temperature), str(cfg_coef), str(gen_type)]
- wav_target, mp4_target, json_target = save_outputs(outs[0], outs_audio[0], tags, gen_type)
- # Remove the temporary files.
- for out in outs:
- os.remove(out)
- for out in outs_audio:
- os.remove(out)
- 
- return mp4_target, wav_target, outs_backup[0], [mp4_target, wav_target, json_target], seed
- 
- 
- max_textboxes = 10
- 
- 
- def get_available_models():
- return sorted([re.sub(r'\.pt$', '', item.name) for item in list(Path('models/').glob('*')) if item.name.endswith('.pt')])
- 
- 
- def toggle_audio_src(choice):
- if choice == "mic":
- return gr.update(source="microphone", value=None, label="Microphone")
- else:
- return gr.update(source="upload", value=None, label="File")
- 
- 
- def ui_full(launch_kwargs):
- with gr.Blocks(title='AudioCraft Plus', theme=theme) as interface:
- gr.Markdown(
- """
- # AudioCraft Plus - v2.0.0a
- 
- ### An All-in-One AudioCraft WebUI
- 
- #### **Disclaimer:** This will not run on CPU only. It's best to clone this app and run it on a GPU instance!
- **Alternatively**, you can run this for free on a google colab: - https://colab.research.google.com/github/camenduru/MusicGen-colab/blob/main/MusicGen_ClownOfMadness_plus_colab.ipynb - - **Or**, run this locally on your PC: - https://github.com/GrandaddyShmax/audiocraft_plus/tree/main - - Thanks to: facebookresearch, Camenduru, rkfg, oobabooga, AlexHK and GrandaddyShmax - """ - ) - with gr.Tab("MusicGen"): - gr.Markdown( - """ - ### MusicGen - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Tab("Generation"): - with gr.Accordion("Structure Prompts", open=False): - with gr.Column(): - with gr.Row(): - struc_prompts = gr.Checkbox(label="Enable", value=False, interactive=True, container=False) - bpm = gr.Number(label="BPM", value=120, interactive=True, scale=1, precision=0) - key = gr.Dropdown(["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "Bb", "B"], label="Key", value="C", interactive=True) - scale = gr.Dropdown(["Major", "Minor"], label="Scale", value="Major", interactive=True) - with gr.Row(): - global_prompt = gr.Text(label="Global Prompt", interactive=True, scale=3) - with gr.Row(): - s = gr.Slider(1, max_textboxes, value=1, step=1, label="Prompts:", interactive=True, scale=2) - #s_mode = gr.Radio(["segmentation", "batch"], value="segmentation", interactive=True, scale=1, label="Generation Mode") - with gr.Column(): - textboxes = [] - prompts = [] - repeats = [] - calcs = [] - with gr.Row(): - text0 = gr.Text(label="Input Text", interactive=True, scale=4) - prompts.append(text0) - drag0 = gr.Number(label="Repeat", value=1, interactive=True, scale=1) - repeats.append(drag0) - calc0 = gr.Text(interactive=False, value="00:00 - 00:00", scale=1, label="Time") - calcs.append(calc0) - for i in range(max_textboxes): - with gr.Row(visible=False) as t: - text = gr.Text(label="Input Text", interactive=True, scale=3) - repeat = gr.Number(label="Repeat", minimum=1, value=1, interactive=True, scale=1) - calc = gr.Text(interactive=False, value="00:00 - 00:00", 
scale=1, label="Time") - textboxes.append(t) - prompts.append(text) - repeats.append(repeat) - calcs.append(calc) - to_calc = gr.Button("Calculate Timings", variant="secondary") - with gr.Row(): - duration = gr.Slider(minimum=1, maximum=300, value=10, step=1, label="Duration", interactive=True) - with gr.Row(): - overlap = gr.Slider(minimum=1, maximum=29, value=12, step=1, label="Overlap", interactive=True) - with gr.Row(): - seed = gr.Number(label="Seed", value=-1, scale=4, precision=0, interactive=True) - gr.Button('\U0001f3b2\ufe0f', scale=1).click(fn=lambda: -1, outputs=[seed], queue=False) - reuse_seed = gr.Button('\u267b\ufe0f', scale=1) - - with gr.Tab("Audio"): - with gr.Row(): - with gr.Column(): - input_type = gr.Radio(["file", "mic"], value="file", label="Input Type (optional)", interactive=True) - mode = gr.Radio(["melody", "sample"], label="Input Audio Mode (optional)", value="sample", interactive=True) - with gr.Row(): - trim_start = gr.Number(label="Trim Start", value=0, interactive=True) - trim_end = gr.Number(label="Trim End", value=0, interactive=True) - audio = gr.Audio(source="upload", type="numpy", label="Input Audio (optional)", interactive=True) - - with gr.Tab("Customization"): - with gr.Row(): - with gr.Column(): - background = gr.ColorPicker(value="#0f0f0f", label="background color", interactive=True, scale=0) - bar1 = gr.ColorPicker(value="#84cc16", label="bar color start", interactive=True, scale=0) - bar2 = gr.ColorPicker(value="#10b981", label="bar color end", interactive=True, scale=0) - with gr.Column(): - image = gr.Image(label="Background Image", type="filepath", interactive=True, scale=4) - with gr.Row(): - height = gr.Number(label="Height", value=512, interactive=True) - width = gr.Number(label="Width", value=768, interactive=True) - - with gr.Tab("Settings"): - with gr.Row(): - channel = gr.Radio(["mono", "stereo", "stereo effect"], label="Output Audio Channels", value="stereo", interactive=True, scale=1) - sr_select = 
gr.Dropdown(["11025", "16000", "22050", "24000", "32000", "44100", "48000"], label="Output Audio Sample Rate", value="48000", interactive=True) - with gr.Row(): - model = gr.Radio(["melody", "small", "medium", "large", "custom"], label="Model", value="large", interactive=True, scale=1) - with gr.Column(): - dropdown = gr.Dropdown(choices=get_available_models(), value=("No models found" if len(get_available_models()) < 1 else get_available_models()[0]), label='Custom Model (models folder)', elem_classes='slim-dropdown', interactive=True) - ui.create_refresh_button(dropdown, lambda: None, lambda: {'choices': get_available_models()}, 'refresh-button') - basemodel = gr.Radio(["small", "medium", "melody", "large"], label="Base Model", value="medium", interactive=True, scale=1) - with gr.Row(): - decoder = gr.Radio(["Default", "MultiBand_Diffusion"], label="Decoder", value="Default", interactive=True) - with gr.Row(): - topk = gr.Number(label="Top-k", value=250, interactive=True) - topp = gr.Number(label="Top-p", value=0, interactive=True) - temperature = gr.Number(label="Temperature", value=1.0, interactive=True) - cfg_coef = gr.Number(label="Classifier Free Guidance", value=3.0, interactive=True) - with gr.Row(): - submit = gr.Button("Generate", variant="primary") - # Adapted from https://github.com/rkfg/audiocraft/blob/long/app.py, MIT license. 
- _ = gr.Button("Interrupt").click(fn=interrupt, queue=False)
- with gr.Column() as c:
- with gr.Tab("Output"):
- output = gr.Video(label="Generated Music", scale=0)
- with gr.Row():
- audio_only = gr.Audio(type="numpy", label="Audio Only", interactive=False)
- backup_only = gr.Audio(type="numpy", label="Backup Audio", interactive=False, visible=False)
- send_audio = gr.Button("Send to Input Audio")
- seed_used = gr.Number(label='Seed used', value=-1, interactive=False)
- download = gr.File(label="Generated Files", interactive=False)
- with gr.Tab("Wiki"):
- gr.Markdown(
- """
- - **[Generate (button)]:**
- Generates the music with the given settings and prompts.
- 
- - **[Interrupt (button)]:**
- Stops the music generation as soon as it can, providing an incomplete output.
- 
- ---
- 
- ### Generation Tab:
- 
- #### Structure Prompts:
- 
- This feature helps reduce repetitive prompts by allowing you to set global prompts
- that will be used for all prompt segments.
- 
- - **[Structure Prompts (checkbox)]:**
- Enable/Disable the structure prompts feature.
- 
- - **[BPM (number)]:**
- Beats per minute of the generated music.
- 
- - **[Key (dropdown)]:**
- The key of the generated music.
- 
- - **[Scale (dropdown)]:**
- The scale of the generated music.
- 
- - **[Global Prompt (text)]:**
- Here write the prompt that you wish to be used for all prompt segments.
- 
- #### Multi-Prompt:
- 
- This feature allows you to control the music, adding variation across different time segments.
- You have up to 10 prompt segments. The first prompt will always be 30s long;
- the other prompts will be [30s - overlap].
- For example, if the overlap is 10s, each prompt segment will be 20s.
- 
- - **[Prompt Segments (number)]:**
- Number of unique prompts to use throughout the music generation.
- 
- - **[Prompt/Input Text (prompt)]:**
- Here describe the music you wish the model to generate.
- 
- - **[Repeat (number)]:**
- Write how many times this prompt will repeat (instead of wasting another prompt segment on the same prompt).
- 
- - **[Time (text)]:**
- The time range of the prompt segment.
- 
- - **[Calculate Timings (button)]:**
- Calculates the timings of the prompt segments.
- 
- - **[Duration (number)]:**
- How long you want the generated music to be (in seconds).
- 
- - **[Overlap (number)]:**
- How much each new segment will reference the previous segment (in seconds).
- For example, if you choose 20s: each new segment after the first one will reference the previous segment for 20s
- and will generate only 10s of new music. The model can only process 30s of music.
- 
- - **[Seed (number)]:**
- Your generated music ID. If you wish to generate the exact same music,
- use the exact seed with the exact prompts
- (this way you can also extend a specific song that was generated too short).
- 
- - **[Random Seed (button)]:**
- Gives "-1" as a seed, which counts as a random seed.
- 
- - **[Copy Previous Seed (button)]:**
- Copies the seed from the output seed (if you don't feel like doing it manually).
- 
- ---
- 
- ### Audio Tab:
- 
- - **[Input Type (selection)]:**
- `File` mode allows you to upload an audio file to use as input
- `Mic` mode allows you to use your microphone as input
- 
- - **[Input Audio Mode (selection)]:**
- `Melody` mode only works with the melody model: it conditions the music generation to reference the melody
- `Sample` mode works with any model: it gives a music sample to the model to generate its continuation.
- 
- - **[Trim Start and Trim End (numbers)]:**
- `Trim Start` sets how much you'd like to trim the input audio from the start
- `Trim End` same as the above, but from the end
- 
- - **[Input Audio (audio file)]:**
- Input here the audio you wish to use with "melody" or "sample" mode.
- 
- ---
- 
- ### Customization Tab:
- 
- - **[Background Color (color)]:**
- Works only if you don't upload an image. Color of the background of the waveform.
- - - **[Bar Color Start (color)]:** - First color of the waveform bars. - - - **[Bar Color End (color)]:** - Second color of the waveform bars. - - - **[Background Image (image)]:** - Background image that you wish to be attached to the generated video along with the waveform. - - - **[Height and Width (numbers)]:** - Output video resolution, only works with image. - (minimum height and width is 256). - - --- - - ### Settings Tab: - - - **[Output Audio Channels (selection)]:** - With this you can select the amount of channels that you wish for your output audio. - `mono` is a straightforward single channel audio - `stereo` is a dual channel audio but it will sound more or less like mono - `stereo effect` this one is also dual channel but uses tricks to simulate a stereo audio. - - - **[Output Audio Sample Rate (dropdown)]:** - The output audio sample rate, the model default is 32000. - - - **[Model (selection)]:** - Here you can choose which model you wish to use: - `melody` model is based on the medium model with a unique feature that lets you use melody conditioning - `small` model is trained on 300M parameters - `medium` model is trained on 1.5B parameters - `large` model is trained on 3.3B parameters - `custom` model runs the custom model that you provided. - - - **[Custom Model (selection)]:** - This dropdown will show you models that are placed in the `models` folder - you must select `custom` in the model options in order to use it. - - - **[Refresh (button)]:** - Refreshes the dropdown list for custom model. - - - **[Base Model (selection)]:** - Choose here the model that your custom model is based on. - - - **[Decoder (selection)]:** - Choose here the decoder that you wish to use: - `Default` is the default decoder - `MultiBand_Diffusion` is a decoder that uses diffusion to generate the audio. - - - **[Top-k (number)]:** - is a parameter used in text generation models, including music generation models. 
It determines the number of most likely next tokens to consider at each step of the generation process. The model ranks all possible tokens based on their predicted probabilities, and then selects the top-k tokens from the ranked list. The model then samples from this reduced set of tokens to determine the next token in the generated sequence. A smaller value of k results in a more focused and deterministic output, while a larger value of k allows for more diversity in the generated music. - - - **[Top-p (number)]:** - also known as nucleus sampling or probabilistic sampling, is another method used for token selection during text generation. Instead of specifying a fixed number like top-k, top-p considers the cumulative probability distribution of the ranked tokens. It selects the smallest possible set of tokens whose cumulative probability exceeds a certain threshold (usually denoted as p). The model then samples from this set to choose the next token. This approach ensures that the generated output maintains a balance between diversity and coherence, as it allows for a varying number of tokens to be considered based on their probabilities. - - - **[Temperature (number)]:** - is a parameter that controls the randomness of the generated output. It is applied during the sampling process, where a higher temperature value results in more random and diverse outputs, while a lower temperature value leads to more deterministic and focused outputs. In the context of music generation, a higher temperature can introduce more variability and creativity into the generated music, but it may also lead to less coherent or structured compositions. On the other hand, a lower temperature can produce more repetitive and predictable music. - - - **[Classifier Free Guidance (number)]:** - refers to a technique used in some music generation models where a separate classifier network is trained to provide guidance or control over the generated music. 
This classifier is trained on labeled data to recognize specific musical characteristics or styles. During the generation process, the output of the generator model is evaluated by the classifier, and the generator is encouraged to produce music that aligns with the desired characteristics or style. This approach allows for more fine-grained control over the generated music, enabling users to specify certain attributes they want the model to capture. - """ - ) - with gr.Tab("AudioGen"): - gr.Markdown( - """ - ### AudioGen - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Tab("Generation"): - with gr.Accordion("Structure Prompts", open=False): - with gr.Row(): - struc_prompts_a = gr.Checkbox(label="Enable", value=False, interactive=True, container=False) - global_prompt_a = gr.Text(label="Global Prompt", interactive=True, scale=3) - with gr.Row(): - s_a = gr.Slider(1, max_textboxes, value=1, step=1, label="Prompts:", interactive=True, scale=2) - with gr.Column(): - textboxes_a = [] - prompts_a = [] - repeats_a = [] - calcs_a = [] - with gr.Row(): - text0_a = gr.Text(label="Input Text", interactive=True, scale=4) - prompts_a.append(text0_a) - drag0_a = gr.Number(label="Repeat", value=1, interactive=True, scale=1) - repeats_a.append(drag0_a) - calc0_a = gr.Text(interactive=False, value="00:00 - 00:00", scale=1, label="Time") - calcs_a.append(calc0_a) - for i in range(max_textboxes): - with gr.Row(visible=False) as t_a: - text_a = gr.Text(label="Input Text", interactive=True, scale=3) - repeat_a = gr.Number(label="Repeat", minimum=1, value=1, interactive=True, scale=1) - calc_a = gr.Text(interactive=False, value="00:00 - 00:00", scale=1, label="Time") - textboxes_a.append(t_a) - prompts_a.append(text_a) - repeats_a.append(repeat_a) - calcs_a.append(calc_a) - to_calc_a = gr.Button("Calculate Timings", variant="secondary") - with gr.Row(): - duration_a = gr.Slider(minimum=1, maximum=300, value=10, step=1, label="Duration", interactive=True) - with gr.Row(): - 
overlap_a = gr.Slider(minimum=1, maximum=9, value=2, step=1, label="Overlap", interactive=True) - with gr.Row(): - seed_a = gr.Number(label="Seed", value=-1, scale=4, precision=0, interactive=True) - gr.Button('\U0001f3b2\ufe0f', scale=1).click(fn=lambda: -1, outputs=[seed_a], queue=False) - reuse_seed_a = gr.Button('\u267b\ufe0f', scale=1) - - with gr.Tab("Audio"): - with gr.Row(): - with gr.Column(): - input_type_a = gr.Radio(["file", "mic"], value="file", label="Input Type (optional)", interactive=True) - mode_a = gr.Radio(["sample"], label="Input Audio Mode (optional)", value="sample", interactive=False, visible=False) - with gr.Row(): - trim_start_a = gr.Number(label="Trim Start", value=0, interactive=True) - trim_end_a = gr.Number(label="Trim End", value=0, interactive=True) - audio_a = gr.Audio(source="upload", type="numpy", label="Input Audio (optional)", interactive=True) - - with gr.Tab("Customization"): - with gr.Row(): - with gr.Column(): - background_a = gr.ColorPicker(value="#0f0f0f", label="background color", interactive=True, scale=0) - bar1_a = gr.ColorPicker(value="#84cc16", label="bar color start", interactive=True, scale=0) - bar2_a = gr.ColorPicker(value="#10b981", label="bar color end", interactive=True, scale=0) - with gr.Column(): - image_a = gr.Image(label="Background Image", type="filepath", interactive=True, scale=4) - with gr.Row(): - height_a = gr.Number(label="Height", value=512, interactive=True) - width_a = gr.Number(label="Width", value=768, interactive=True) - - with gr.Tab("Settings"): - with gr.Row(): - channel_a = gr.Radio(["mono", "stereo", "stereo effect"], label="Output Audio Channels", value="stereo", interactive=True, scale=1) - sr_select_a = gr.Dropdown(["11025", "16000", "22050", "24000", "32000", "44100", "48000"], label="Output Audio Sample Rate", value="48000", interactive=True) - with gr.Row(): - model_a = gr.Radio(["medium"], label="Model", value="medium", interactive=False, visible=False) - decoder_a = 
gr.Radio(["Default"], label="Decoder", value="Default", interactive=False, visible=False)
- with gr.Row():
- topk_a = gr.Number(label="Top-k", value=250, interactive=True)
- topp_a = gr.Number(label="Top-p", value=0, interactive=True)
- temperature_a = gr.Number(label="Temperature", value=1.0, interactive=True)
- cfg_coef_a = gr.Number(label="Classifier Free Guidance", value=3.0, interactive=True)
- with gr.Row():
- submit_a = gr.Button("Generate", variant="primary")
- _ = gr.Button("Interrupt").click(fn=interrupt, queue=False)
- with gr.Column():
- with gr.Tab("Output"):
- output_a = gr.Video(label="Generated Audio", scale=0)
- with gr.Row():
- audio_only_a = gr.Audio(type="numpy", label="Audio Only", interactive=False)
- backup_only_a = gr.Audio(type="numpy", label="Backup Audio", interactive=False, visible=False)
- send_audio_a = gr.Button("Send to Input Audio")
- seed_used_a = gr.Number(label='Seed used', value=-1, interactive=False)
- download_a = gr.File(label="Generated Files", interactive=False)
- with gr.Tab("Wiki"):
- gr.Markdown(
- """
- - **[Generate (button)]:**
- Generates the audio with the given settings and prompts.
- 
- - **[Interrupt (button)]:**
- Stops the audio generation as soon as it can, providing an incomplete output.
- 
- ---
- 
- ### Generation Tab:
- 
- #### Structure Prompts:
- 
- This feature helps reduce repetitive prompts by allowing you to set global prompts
- that will be used for all prompt segments.
- 
- - **[Structure Prompts (checkbox)]:**
- Enable/Disable the structure prompts feature.
- 
- - **[Global Prompt (text)]:**
- Here write the prompt that you wish to be used for all prompt segments.
- 
- #### Multi-Prompt:
- 
- This feature allows you to control the audio, adding variation across different time segments.
- You have up to 10 prompt segments. The first prompt will always be 10s long;
- the other prompts will be [10s - overlap].
- For example, if the overlap is 2s, each prompt segment will be 8s.
- 
- - **[Prompt Segments (number)]:**
- Number of unique prompts to use throughout the audio generation.
- 
- - **[Prompt/Input Text (prompt)]:**
- Here describe the audio you wish the model to generate.
- 
- - **[Repeat (number)]:**
- Write how many times this prompt will repeat (instead of wasting another prompt segment on the same prompt).
- 
- - **[Time (text)]:**
- The time range of the prompt segment.
- 
- - **[Calculate Timings (button)]:**
- Calculates the timings of the prompt segments.
- 
- - **[Duration (number)]:**
- How long you want the generated audio to be (in seconds).
- 
- - **[Overlap (number)]:**
- How much each new segment will reference the previous segment (in seconds).
- For example, if you choose 2s: each new segment after the first one will reference the previous segment for 2s
- and will generate only 8s of new audio. The model can only process 10s of audio.
- 
- - **[Seed (number)]:**
- Your generated audio ID. If you wish to generate the exact same audio,
- use the exact seed with the exact prompts
- (this way you can also extend a specific clip that was generated too short).
- 
- - **[Random Seed (button)]:**
- Gives "-1" as a seed, which counts as a random seed.
- 
- - **[Copy Previous Seed (button)]:**
- Copies the seed from the output seed (if you don't feel like doing it manually).
- 
- ---
- 
- ### Audio Tab:
- 
- - **[Input Type (selection)]:**
- `File` mode allows you to upload an audio file to use as input
- `Mic` mode allows you to use your microphone as input
- 
- - **[Trim Start and Trim End (numbers)]:**
- `Trim Start` sets how much you'd like to trim the input audio from the start
- `Trim End` same as the above, but from the end
- 
- - **[Input Audio (audio file)]:**
- Input here the audio you wish to use.
- 
- ---
- 
- ### Customization Tab:
- 
- - **[Background Color (color)]:**
- Works only if you don't upload an image. Color of the background of the waveform.
- 
- - **[Bar Color Start (color)]:**
- First color of the waveform bars.
- - - **[Bar Color End (color)]:** - Second color of the waveform bars. - - - **[Background Image (image)]:** - Background image that you wish to be attached to the generated video along with the waveform. - - - **[Height and Width (numbers)]:** - Output video resolution, only works with image. - (minimum height and width is 256). - - --- - - ### Settings Tab: - - - **[Output Audio Channels (selection)]:** - With this you can select the amount of channels that you wish for your output audio. - `mono` is a straightforward single channel audio - `stereo` is a dual channel audio but it will sound more or less like mono - `stereo effect` this one is also dual channel but uses tricks to simulate a stereo audio. - - - **[Output Audio Sample Rate (dropdown)]:** - The output audio sample rate, the model default is 32000. - - - **[Top-k (number)]:** - is a parameter used in text generation models, including music generation models. It determines the number of most likely next tokens to consider at each step of the generation process. The model ranks all possible tokens based on their predicted probabilities, and then selects the top-k tokens from the ranked list. The model then samples from this reduced set of tokens to determine the next token in the generated sequence. A smaller value of k results in a more focused and deterministic output, while a larger value of k allows for more diversity in the generated music. - - - **[Top-p (number)]:** - also known as nucleus sampling or probabilistic sampling, is another method used for token selection during text generation. Instead of specifying a fixed number like top-k, top-p considers the cumulative probability distribution of the ranked tokens. It selects the smallest possible set of tokens whose cumulative probability exceeds a certain threshold (usually denoted as p). The model then samples from this set to choose the next token. 
This approach ensures that the generated output maintains a balance between diversity and coherence, as it allows for a varying number of tokens to be considered based on their probabilities. - - - **[Temperature (number)]:** - is a parameter that controls the randomness of the generated output. It is applied during the sampling process, where a higher temperature value results in more random and diverse outputs, while a lower temperature value leads to more deterministic and focused outputs. In the context of music generation, a higher temperature can introduce more variability and creativity into the generated music, but it may also lead to less coherent or structured compositions. On the other hand, a lower temperature can produce more repetitive and predictable music. - - - **[Classifier Free Guidance (number)]:** - refers to a technique used in some music generation models where a separate classifier network is trained to provide guidance or control over the generated music. This classifier is trained on labeled data to recognize specific musical characteristics or styles. During the generation process, the output of the generator model is evaluated by the classifier, and the generator is encouraged to produce music that aligns with the desired characteristics or style. This approach allows for more fine-grained control over the generated music, enabling users to specify certain attributes they want the model to capture. 
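The sampling parameters above interact in a fixed order: temperature rescales the logits, top-k trims the candidate list to a fixed size, and top-p optionally shrinks it further by cumulative probability. A minimal illustrative sketch (not the app's internal code, and far simpler than the real tokenizer-scale vocabularies):

```python
# Hedged, simplified sketch of top-k / top-p / temperature sampling.
# All names here are illustrative; the actual models use their own samplers.
import math
import random

def sample_next_token(logits, top_k=250, top_p=0.0, temperature=1.0, rng=None):
    rng = rng or random.Random(0)
    # Temperature rescales logits: <1.0 sharpens, >1.0 flattens the distribution.
    scaled = [l / max(temperature, 1e-8) for l in logits]
    # Softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    ranked = sorted(((e / total, i) for i, e in enumerate(exps)), reverse=True)
    # Top-k: keep only the k most likely tokens.
    ranked = ranked[:top_k]
    # Top-p (nucleus): keep the smallest prefix whose cumulative mass reaches p.
    # top_p == 0 disables it, matching the UI default.
    if top_p > 0:
        kept, cum = [], 0.0
        for p, i in ranked:
            kept.append((p, i))
            cum += p
            if cum >= top_p:
                break
        ranked = kept
    # Renormalize over the reduced set and sample from it.
    norm = sum(p for p, _ in ranked)
    r, cum = rng.random() * norm, 0.0
    for p, i in ranked:
        cum += p
        if r <= cum:
            return i
    return ranked[-1][1]

# With top_k=1 only the single most likely token can ever be chosen.
token = sample_next_token([2.0, 1.0, 0.5, -1.0], top_k=1)  # → 0
```

Smaller `top_k` / `top_p` values make the reduced set tighter (more deterministic output); larger values admit more candidates (more diversity), which matches the parameter descriptions above.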
- """
- )
- with gr.Tab("Audio Info"):
- gr.Markdown(
- """
- ### Audio Info
- """
- )
- with gr.Row():
- with gr.Column():
- in_audio = gr.File(type="file", label="Input Any Audio", interactive=True)
- with gr.Row():
- send_gen = gr.Button("Send to MusicGen", variant="primary")
- send_gen_a = gr.Button("Send to AudioGen", variant="primary")
- with gr.Column():
- info = gr.Textbox(label="Audio Info", lines=10, interactive=False)
- with gr.Tab("Changelog"):
- gr.Markdown(
- """
- ## Changelog:
- 
- ### v2.0.0a
- 
- - Forgot to move all the updates to app.py from temp2.py... oops
- 
- 
- 
- ### v2.0.0
- 
- - Changed name from MusicGen+ to AudioCraft Plus
- 
- - Complete overhaul of the repo "backend" with the latest changes from the main facebookresearch repo
- 
- - Added a new decoder: MultiBand_Diffusion
- 
- - Added AudioGen: a new tab for generating audio
- 
- 
- 
- ### v1.2.8c
- 
- - Implemented reverse compatibility for the audio info tab with previous versions
- 
- 
- 
- ### v1.2.8b
- 
- - Fixed the error when loading default models
- 
- 
- 
- ### v1.2.8a
- 
- - Adapted Audio info tab to work with the new structure prompts feature
- 
- - Now custom models actually work, make sure you select the correct base model
- 
- 
- 
- ### v1.2.8
- 
- - Now you will also receive a JSON file with metadata of the generated audio
- 
- - Added error messages in Audio Info tab
- 
- - Added structure prompts: you can select bpm, key and global prompt for all prompts
- 
- - Added time display next to each prompt, can be calculated with "Calculate Timings" button
- 
- 
- 
- ### v1.2.7
- 
- - When sending generated audio to Input Audio, it will send a backup audio with default settings
- (best for continuous generation)
- 
- - Added Metadata to generated audio (Thanks to AlexHK ♥)
- 
- - Added Audio Info tab that will display the metadata of the input audio
- 
- - Added "send to Text2Audio" button in Audio Info tab
- 
- - Generated audio is now stored in the "output" folder (Thanks to AlexHK ♥)
- 
- - Added an output area with
generated files and download buttons
- 
- - Enhanced Stereo effect (Thanks to AlexHK ♥)
- 
- 
- 
- ### v1.2.6
- 
- - Added option to generate in stereo (instead of only mono)
- 
- - Added dropdown for selecting output sample rate (model default is 32000)
- 
- 
- 
- ### v1.2.5a
- 
- - Added file cleaner (This comes from the main facebookresearch repo)
- 
- - Reorganized a little, moved audio to a separate tab
- 
- 
- 
- ### v1.2.5
- 
- - Gave a unique lime theme to the webui
- 
- - Added additional output for audio only
- 
- - Added button to send generated audio to Input Audio
- 
- - Added option to trim Input Audio
- 
- 
- 
- ### v1.2.4
- 
- - Added mic input (This comes from the main facebookresearch repo)
- 
- 
- 
- ### v1.2.3
- 
- - Added option to change video size to fit the image you upload
- 
- 
- 
- ### v1.2.2
- 
- - Added Wiki, Changelog and About tabs
- 
- 
- 
- ### v1.2.1
- 
- - Added tabs and organized the entire interface
- 
- - Added option to attach an image to the output video
- 
- - Added option to load fine-tuned models (Yet to be tested)
- 
- 
- 
- ### v1.2.0
- 
- - Added Multi-Prompt
- 
- 
- 
- ### v1.1.3
- 
- - Added customization options for the generated waveform
- 
- 
- 
- ### v1.1.2
- 
- - Removed sample length limit: now you can input audio of any length as a music sample
- 
- 
- 
- ### v1.1.1
- 
- - Improved music sample audio quality when using music continuation
- 
- 
- 
- ### v1.1.0
- 
- - Rebuilt the repo on top of the latest structure of the main MusicGen repo
- 
- - Improved the Music continuation feature
- 
- 
- 
- ### v1.0.0 - Stable Version
- 
- - Added Music continuation
- """
- )
- with gr.Tab("About"):
- gen_type = gr.Text(value="music", interactive=False, visible=False)
- gen_type_a = gr.Text(value="audio", interactive=False, visible=False)
- gr.Markdown(
- """
- This is your private demo for [MusicGen](https://github.com/facebookresearch/audiocraft), a simple and controllable model for music generation
- presented at: ["Simple and Controllable Music
Generation"](https://huggingface.co/papers/2306.05284) - - ## MusicGen+ is an extended version of the original MusicGen by facebookresearch. - - ### Repo: https://github.com/GrandaddyShmax/audiocraft_plus/tree/plus - - --- - - ### This project was possible thanks to: - - #### GrandaddyShmax - https://github.com/GrandaddyShmax - - #### Camenduru - https://github.com/camenduru - - #### rkfg - https://github.com/rkfg - - #### oobabooga - https://github.com/oobabooga - - #### AlexHK - https://github.com/alanhk147 - """ - ) - - send_gen.click(info_to_params, inputs=[in_audio], outputs=[decoder, struc_prompts, global_prompt, bpm, key, scale, model, dropdown, basemodel, s, prompts[0], prompts[1], prompts[2], prompts[3], prompts[4], prompts[5], prompts[6], prompts[7], prompts[8], prompts[9], repeats[0], repeats[1], repeats[2], repeats[3], repeats[4], repeats[5], repeats[6], repeats[7], repeats[8], repeats[9], mode, duration, topk, topp, temperature, cfg_coef, seed, overlap, channel, sr_select], queue=False) - reuse_seed.click(fn=lambda x: x, inputs=[seed_used], outputs=[seed], queue=False) - send_audio.click(fn=lambda x: x, inputs=[backup_only], outputs=[audio], queue=False) - submit.click(predict_full, inputs=[gen_type, model, decoder, dropdown, basemodel, s, struc_prompts, bpm, key, scale, global_prompt, prompts[0], prompts[1], prompts[2], prompts[3], prompts[4], prompts[5], prompts[6], prompts[7], prompts[8], prompts[9], repeats[0], repeats[1], repeats[2], repeats[3], repeats[4], repeats[5], repeats[6], repeats[7], repeats[8], repeats[9], audio, mode, trim_start, trim_end, duration, topk, topp, temperature, cfg_coef, seed, overlap, image, height, width, background, bar1, bar2, channel, sr_select], outputs=[output, audio_only, backup_only, download, seed_used]) - input_type.change(toggle_audio_src, input_type, [audio], queue=False, show_progress=False) - to_calc.click(calc_time, inputs=[gen_type, s, duration, overlap, repeats[0], repeats[1], repeats[2], repeats[3], 
repeats[4], repeats[5], repeats[6], repeats[7], repeats[8], repeats[9]], outputs=[calcs[0], calcs[1], calcs[2], calcs[3], calcs[4], calcs[5], calcs[6], calcs[7], calcs[8], calcs[9]], queue=False) - - send_gen_a.click(info_to_params_a, inputs=[in_audio], outputs=[decoder_a, struc_prompts_a, global_prompt_a, s_a, prompts_a[0], prompts_a[1], prompts_a[2], prompts_a[3], prompts_a[4], prompts_a[5], prompts_a[6], prompts_a[7], prompts_a[8], prompts_a[9], repeats_a[0], repeats_a[1], repeats_a[2], repeats_a[3], repeats_a[4], repeats_a[5], repeats_a[6], repeats_a[7], repeats_a[8], repeats_a[9], duration_a, topk_a, topp_a, temperature_a, cfg_coef_a, seed_a, overlap_a, channel_a, sr_select_a], queue=False) - reuse_seed_a.click(fn=lambda x: x, inputs=[seed_used_a], outputs=[seed_a], queue=False) - send_audio_a.click(fn=lambda x: x, inputs=[backup_only_a], outputs=[audio_a], queue=False) - submit_a.click(predict_full, inputs=[gen_type_a, model_a, decoder_a, dropdown, basemodel, s_a, struc_prompts_a, bpm, key, scale, global_prompt_a, prompts_a[0], prompts_a[1], prompts_a[2], prompts_a[3], prompts_a[4], prompts_a[5], prompts_a[6], prompts_a[7], prompts_a[8], prompts_a[9], repeats_a[0], repeats_a[1], repeats_a[2], repeats_a[3], repeats_a[4], repeats_a[5], repeats_a[6], repeats_a[7], repeats_a[8], repeats_a[9], audio_a, mode_a, trim_start_a, trim_end_a, duration_a, topk_a, topp_a, temperature_a, cfg_coef_a, seed_a, overlap_a, image_a, height_a, width_a, background_a, bar1_a, bar2_a, channel_a, sr_select_a], outputs=[output_a, audio_only_a, backup_only_a, download_a, seed_used_a]) - input_type_a.change(toggle_audio_src, input_type_a, [audio_a], queue=False, show_progress=False) - to_calc_a.click(calc_time, inputs=[gen_type_a, s_a, duration_a, overlap_a, repeats_a[0], repeats_a[1], repeats_a[2], repeats_a[3], repeats_a[4], repeats_a[5], repeats_a[6], repeats_a[7], repeats_a[8], repeats_a[9]], outputs=[calcs_a[0], calcs_a[1], calcs_a[2], calcs_a[3], calcs_a[4], calcs_a[5], calcs_a[6], 
calcs_a[7], calcs_a[8], calcs_a[9]], queue=False) - - in_audio.change(get_audio_info, in_audio, outputs=[info]) - - def variable_outputs(k): - k = int(k) - 1 - return [gr.Textbox.update(visible=True)]*k + [gr.Textbox.update(visible=False)]*(max_textboxes-k) - def get_size(image): - if image is not None: - img = Image.open(image) - img_height = img.height - img_width = img.width - if (img_height%2) != 0: - img_height = img_height + 1 - if (img_width%2) != 0: - img_width = img_width + 1 - return img_height, img_width - else: - return 512, 768 - - image.change(get_size, image, outputs=[height, width]) - image_a.change(get_size, image_a, outputs=[height_a, width_a]) - s.change(variable_outputs, s, textboxes) - s_a.change(variable_outputs, s_a, textboxes_a) - interface.queue().launch(**launch_kwargs) - - -def ui_batched(launch_kwargs): - with gr.Blocks() as demo: - gr.Markdown( - """ - # MusicGen - - This is the demo for [MusicGen](https://github.com/facebookresearch/audiocraft), - a simple and controllable model for music generation - presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284). -
    - Duplicate this Space for longer sequences, more control and no queue.
    - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - text = gr.Text(label="Describe your music", lines=2, interactive=True) - with gr.Column(): - radio = gr.Radio(["file", "mic"], value="file", - label="Condition on a melody (optional) File or Mic") - melody = gr.Audio(source="upload", type="numpy", label="File", - interactive=True, elem_id="melody-input") - with gr.Row(): - submit = gr.Button("Generate") - with gr.Column(): - output = gr.Video(label="Generated Music") - audio_output = gr.Audio(label="Generated Music (wav)", type='filepath') - submit.click(predict_batched, inputs=[text, melody], - outputs=[output, audio_output], batch=True, max_batch_size=MAX_BATCH_SIZE) - radio.change(toggle_audio_src, radio, [melody], queue=False, show_progress=False) - gr.Examples( - fn=predict_batched, - examples=[ - [ - "An 80s driving pop song with heavy drums and synth pads in the background", - "./assets/bach.mp3", - ], - [ - "A cheerful country song with acoustic guitars", - "./assets/bolero_ravel.mp3", - ], - [ - "90s rock song with electric guitar and heavy drums", - None, - ], - [ - "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions bpm: 130", - "./assets/bach.mp3", - ], - [ - "lofi slow bpm electro chill with organic samples", - None, - ], - ], - inputs=[text, melody], - outputs=[output] - ) - gr.Markdown(""" - ### More details - - The model will generate 12 seconds of audio based on the description you provided. - You can optionally provide a reference audio from which a broad melody will be extracted. - The model will then try to follow both the description and melody provided. - All samples are generated with the `melody` model. - - You can also use your own GPU or a Google Colab by following the instructions on our repo. - - See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft) - for more details. 
- """) - - demo.queue(max_size=8 * 4).launch(**launch_kwargs) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - '--listen', - type=str, - default='0.0.0.0' if 'SPACE_ID' in os.environ else '127.0.0.1', - help='IP to listen on for connections to Gradio', - ) - parser.add_argument( - '--username', type=str, default='', help='Username for authentication' - ) - parser.add_argument( - '--password', type=str, default='', help='Password for authentication' - ) - parser.add_argument( - '--server_port', - type=int, - default=0, - help='Port to run the server listener on', - ) - parser.add_argument( - '--inbrowser', action='store_true', help='Open in browser' - ) - parser.add_argument( - '--share', action='store_true', help='Share the gradio UI' - ) - parser.add_argument( - '--unload_model', action='store_true', help='Unload the model after every generation to save GPU memory' - ) - - parser.add_argument( - '--unload_to_cpu', action='store_true', help='Move the model to main RAM after every generation to save GPU memory but reload faster than after full unload (see above)' - ) - - parser.add_argument( - '--cache', action='store_true', help='Cache models in RAM to quickly switch between them' - ) - - args = parser.parse_args() - UNLOAD_MODEL = args.unload_model - MOVE_TO_CPU = args.unload_to_cpu - if args.cache: - MODELS = {} - - launch_kwargs = {} - launch_kwargs['server_name'] = args.listen - - if args.username and args.password: - launch_kwargs['auth'] = (args.username, args.password) - if args.server_port: - launch_kwargs['server_port'] = args.server_port - if args.inbrowser: - launch_kwargs['inbrowser'] = args.inbrowser - if args.share: - launch_kwargs['share'] = args.share - - # Show the interface - if IS_BATCHED: - global USE_DIFFUSION - USE_DIFFUSION = False - ui_batched(launch_kwargs) - else: - ui_full(launch_kwargs) \ No newline at end of file diff --git a/spaces/HARISH246/3D/app.py b/spaces/HARISH246/3D/app.py deleted 
file mode 100644 index e63051eaff21d59dcbbb54ba5e9700c5295bd71e..0000000000000000000000000000000000000000 --- a/spaces/HARISH246/3D/app.py +++ /dev/null @@ -1,261 +0,0 @@ -import os -from PIL import Image -import torch - -from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config -from point_e.diffusion.sampler import PointCloudSampler -from point_e.models.download import load_checkpoint -from point_e.models.configs import MODEL_CONFIGS, model_from_config -from point_e.util.plotting import plot_point_cloud -from point_e.util.pc_to_mesh import marching_cubes_mesh - -import skimage.measure - -from pyntcloud import PyntCloud -import matplotlib.colors -import plotly.graph_objs as go - -import trimesh - -import gradio as gr - - -state = "" -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - -def set_state(s): - print(s) - global state - state = s - -def get_state(): - return state - -set_state('Creating txt2mesh model...') -t2m_name = 'base40M-textvec' -t2m_model = model_from_config(MODEL_CONFIGS[t2m_name], device) -t2m_model.eval() -base_diffusion_t2m = diffusion_from_config(DIFFUSION_CONFIGS[t2m_name]) - -set_state('Downloading txt2mesh checkpoint...') -t2m_model.load_state_dict(load_checkpoint(t2m_name, device)) - - -def load_img2mesh_model(model_name): - set_state(f'Creating img2mesh model {model_name}...') - i2m_name = model_name - i2m_model = model_from_config(MODEL_CONFIGS[i2m_name], device) - i2m_model.eval() - base_diffusion_i2m = diffusion_from_config(DIFFUSION_CONFIGS[i2m_name]) - - set_state(f'Downloading img2mesh checkpoint {model_name}...') - i2m_model.load_state_dict(load_checkpoint(i2m_name, device)) - - return i2m_model, base_diffusion_i2m - -img2mesh_model_name = 'base40M' #'base300M' #'base1B' -i2m_model, base_diffusion_i2m = load_img2mesh_model(img2mesh_model_name) - - -set_state('Creating upsample model...') -upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device) -upsampler_model.eval() 
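Aside on the image path of this app: the `prepare_img` helper further down center-crops non-square inputs to a square before resizing to 256x256, but as written it passes four loose arguments to PIL's `Image.crop`, which expects a single 4-tuple `(left, upper, right, lower)` box. A minimal, dependency-free sketch of the intended crop-box arithmetic (`center_crop_box` is an illustrative name, not part of the app):

```python
def center_crop_box(width, height):
    """(left, upper, right, lower) box that center-crops a width x height image to a square.

    For PIL the call would be img.crop(center_crop_box(*img.size)) -- crop() takes one box tuple.
    """
    if width > height:
        offset = (width - height) // 2
        return (offset, 0, width - offset, height)
    offset = (height - width) // 2
    return (0, offset, width, height - offset)

# A 300x200 image keeps the middle 200x200 region; a 200x512 one the middle 200x200 band.
print(center_crop_box(300, 200))  # (50, 0, 250, 200)
print(center_crop_box(200, 512))  # (0, 156, 200, 356)
```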
-upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample']) - -set_state('Downloading upsampler checkpoint...') -upsampler_model.load_state_dict(load_checkpoint('upsample', device)) - -set_state('Creating SDF model...') -sdf_name = 'sdf' -sdf_model = model_from_config(MODEL_CONFIGS[sdf_name], device) -sdf_model.eval() - -set_state('Loading SDF model...') -sdf_model.load_state_dict(load_checkpoint(sdf_name, device)) - -stable_diffusion = gr.Blocks.load(name="spaces/runwayml/stable-diffusion-v1-5") - - -set_state('') - -def get_sampler(model_name, txt2obj, guidance_scale): - - global img2mesh_model_name - global base_diffusion_i2m - global i2m_model - if model_name != img2mesh_model_name: - img2mesh_model_name = model_name - i2m_model, base_diffusion_i2m = load_img2mesh_model(model_name) - - return PointCloudSampler( - device=device, - models=[t2m_model if txt2obj else i2m_model, upsampler_model], - diffusions=[base_diffusion_t2m if txt2obj else base_diffusion_i2m, upsampler_diffusion], - num_points=[1024, 4096 - 1024], - aux_channels=['R', 'G', 'B'], - guidance_scale=[guidance_scale, 0.0 if txt2obj else guidance_scale], - model_kwargs_key_filter=('texts', '') if txt2obj else ("*",) - ) - -def generate_txt2img(prompt): - - prompt = f"“a 3d rendering of {prompt}, full view, white background" - gallery_dir = stable_diffusion(prompt, fn_index=2) - imgs = [os.path.join(gallery_dir, img) for img in os.listdir(gallery_dir) if os.path.splitext(img)[1] == '.jpg'] - - return imgs[0], gr.update(visible=True) - -def generate_3D(input, model_name='base40M', guidance_scale=3.0, grid_size=32): - - set_state('Entered generate function...') - - if isinstance(input, Image.Image): - input = prepare_img(input) - - # if input is a string, it's a text prompt - sampler = get_sampler(model_name, txt2obj=True if isinstance(input, str) else False, guidance_scale=guidance_scale) - - # Produce a sample from the model. 
- set_state('Sampling...') - samples = None - kw_args = dict(texts=[input]) if isinstance(input, str) else dict(images=[input]) - for x in sampler.sample_batch_progressive(batch_size=1, model_kwargs=kw_args): - samples = x - - set_state('Converting to point cloud...') - pc = sampler.output_to_point_clouds(samples)[0] - - set_state('Saving point cloud...') - with open("point_cloud.ply", "wb") as f: - pc.write_ply(f) - - set_state('Converting to mesh...') - save_ply(pc, 'mesh.ply', grid_size) - - set_state('') - - return pc_to_plot(pc), ply_to_obj('mesh.ply', '3d_model.obj'), gr.update(value=['3d_model.obj', 'mesh.ply', 'point_cloud.ply'], visible=True) - -def prepare_img(img): - - w, h = img.size - if w > h: - img = img.crop((w - h) / 2, 0, w - (w - h) / 2, h) - else: - img = img.crop((0, (h - w) / 2, w, h - (h - w) / 2)) - - # resize to 256x256 - img = img.resize((256, 256)) - - return img - -def pc_to_plot(pc): - - return go.Figure( - data=[ - go.Scatter3d( - x=pc.coords[:,0], y=pc.coords[:,1], z=pc.coords[:,2], - mode='markers', - marker=dict( - size=2, - color=['rgb({},{},{})'.format(r,g,b) for r,g,b in zip(pc.channels["R"], pc.channels["G"], pc.channels["B"])], - ) - ) - ], - layout=dict( - scene=dict(xaxis=dict(visible=False), yaxis=dict(visible=False), zaxis=dict(visible=False)) - ), - ) - -def ply_to_obj(ply_file, obj_file): - mesh = trimesh.load(ply_file) - mesh.export(obj_file) - - return obj_file - -def save_ply(pc, file_name, grid_size): - - # Produce a mesh (with vertex colors) - mesh = marching_cubes_mesh( - pc=pc, - model=sdf_model, - batch_size=4096, - grid_size=grid_size, # increase to 128 for resolution used in evals - progress=True, - ) - - # Write the mesh to a PLY file to import into some other program. 
- with open(file_name, 'wb') as f: - mesh.write_ply(f) - - -with gr.Blocks() as app: - gr.Markdown("# Image-to-3D") - - - with gr.Row(): - with gr.Column(): - with gr.Tab("Image to 3D"): - img = gr.Image(label="Image") - gr.Markdown("Best results with images of 3D objects with no shadows on a white background.") - btn_generate_img2obj = gr.Button(value="Generate") - - with gr.Tab("Text to 3D"): - gr.Markdown("Generate an image with Stable Diffusion, then convert it to 3D. Just enter the object you want to generate.") - prompt_sd = gr.Textbox(label="Prompt", placeholder="a 3d rendering of [your prompt], full view, white background") - btn_generate_txt2sd = gr.Button(value="Generate image") - img_sd = gr.Image(label="Image") - btn_generate_sd2obj = gr.Button(value="Convert to 3D", visible=False) - - with gr.Accordion("Advanced settings", open=False): - dropdown_models = gr.Dropdown(label="Model", value="base40M", choices=["base40M", "base300M"]) #, "base1B"]) - guidance_scale = gr.Slider(label="Guidance scale", value=3.0, minimum=3.0, maximum=10.0, step=0.1) - grid_size = gr.Slider(label="Grid size (for .obj 3D model)", value=32, minimum=16, maximum=128, step=16) - - with gr.Column(): - plot = gr.Plot(label="Point cloud") - # btn_pc_to_obj = gr.Button(value="Convert to OBJ", visible=False) - model_3d = gr.Model3D(value=None) - file_out = gr.File(label="Files", visible=False) - - # state_info = state_info = gr.Textbox(label="State", show_label=False).style(container=False) - - - # inputs = [dropdown_models, prompt, img, guidance_scale, grid_size] - outputs = [plot, model_3d, file_out] - - btn_generate_img2obj.click(generate_3D, inputs=[img, dropdown_models, guidance_scale, grid_size], outputs=outputs) - - prompt_sd.submit(generate_txt2img, inputs=prompt_sd, outputs=[img_sd, btn_generate_sd2obj]) - btn_generate_txt2sd.click(generate_txt2img, inputs=prompt_sd, outputs=[img_sd, btn_generate_sd2obj], queue=False) - btn_generate_sd2obj.click(generate_3D, inputs=[img, 
dropdown_models, guidance_scale, grid_size], outputs=outputs) - - # btn_pc_to_obj.click(ply_to_obj, inputs=plot, outputs=[model_3d, file_out]) - - gr.Examples( - examples=[ - ["images/corgi.png"], - ["images/cube_stack.jpg"], - ["images/chair.png"], - ], - inputs=[img], - outputs=outputs, - fn=generate_3D, - cache_examples=False - ) - - # app.load(get_state, inputs=[], outputs=state_info, every=0.5, show_progress=False) - - gr.HTML(""" -

    <!-- credits block (badge markup lost in extraction): "Space by" with Twitter follow and GitHub followers badges, a Buy Me A Coffee link, and a visitor counter -->
    - """) - -app.queue(max_size=250, concurrency_count=6).launch() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py deleted file mode 100644 index ac6340fa0744a08d2b527972dfc669573fb4e1c3..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from argparse import Namespace - -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.optim import FairseqOptimizer - - -class FairseqLRScheduler(object): - def __init__(self, cfg, optimizer): - super().__init__() - if optimizer is not None and not isinstance(optimizer, FairseqOptimizer): - raise ValueError("optimizer must be an instance of FairseqOptimizer") - self.cfg = cfg - self.optimizer = optimizer - self.best = None - - @classmethod - def add_args(cls, parser): - """Add arguments to the parser for this LR scheduler.""" - dc = getattr(cls, "__dataclass", None) - if dc is not None: - gen_parser_from_dataclass(parser, dc()) - - def state_dict(self): - """Return the LR scheduler state dict.""" - return {"best": self.best} - - def load_state_dict(self, state_dict): - """Load an LR scheduler state dict.""" - self.best = state_dict["best"] - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - pass - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - if val_loss is not None: - if self.best is None: - self.best = val_loss - else: - self.best = min(self.best, val_loss) - - def step_update(self, num_updates): - """Update the 
learning rate after each update.""" - return self.optimizer.get_lr() - - def reinit(self, total_num_update, num_updates): - pass - - -class LegacyFairseqLRScheduler(FairseqLRScheduler): - def __init__(self, args: Namespace, optimizer): - if not isinstance(optimizer, FairseqOptimizer): - raise ValueError("optimizer must be an instance of FairseqOptimizer") - self.args = args - self.optimizer = optimizer - self.best = None diff --git a/spaces/Harveenchadha/oiTrans/subword-nmt/subword_nmt/tests/test_glossaries.py b/spaces/Harveenchadha/oiTrans/subword-nmt/subword_nmt/tests/test_glossaries.py deleted file mode 100644 index 2ff7da19fb00a8b8c9e7d33a67d6db4f0c72ef6c..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/subword-nmt/subword_nmt/tests/test_glossaries.py +++ /dev/null @@ -1,137 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -import unittest -import mock - -import os,sys,inspect -currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) -parentdir = os.path.dirname(currentdir) -sys.path.insert(0,parentdir) - -from apply_bpe import isolate_glossary, BPE - -class TestIsolateGlossaryFunction(unittest.TestCase): - - def setUp(self): - self.glossary = 'like' - - def _run_test_case(self, test_case): - orig, expected = test_case - out = isolate_glossary(orig, self.glossary) - self.assertEqual(out, expected) - - def test_empty_string(self): - orig = '' - exp = [''] - test_case = (orig, exp) - self._run_test_case(test_case) - - def test_no_glossary(self): - orig = 'word' - exp = ['word'] - test_case = (orig, exp) - self._run_test_case(test_case) - - def test_isolated_glossary(self): - orig = 'like' - exp = ['like'] - test_case = (orig, exp) - self._run_test_case(test_case) - - def test_word_one_side(self): - orig = 'likeword' - exp = ['like', 'word'] - test_case = (orig, exp) - self._run_test_case(test_case) - - def test_words_both_sides(self): - orig = 'wordlikeword' - exp = ['word', 'like', 'word'] - 
test_case = (orig, exp) - self._run_test_case(test_case) - - def test_back_to_back_glossary(self): - orig = 'likelike' - exp = ['like', 'like'] - test_case = (orig, exp) - self._run_test_case(test_case) - - def test_multiple_glossaries(self): - orig = 'wordlikewordlike' - exp = ['word', 'like', 'word', 'like'] - test_case = (orig, exp) - self._run_test_case(test_case) - -class TestBPEIsolateGlossariesMethod(unittest.TestCase): - - def setUp(self): - - amock = mock.MagicMock() - amock.readline.return_value = 'something' - glossaries = ['like', 'Manuel', 'USA'] - self.bpe = BPE(amock, glossaries=glossaries) - - def _run_test_case(self, test_case): - orig, expected = test_case - out = self.bpe._isolate_glossaries(orig) - self.assertEqual(out, expected) - - def test_multiple_glossaries(self): - orig = 'wordlikeUSAwordManuelManuelwordUSA' - exp = ['word', 'like', 'USA', 'word', 'Manuel', 'Manuel', 'word', 'USA'] - test_case = (orig, exp) - self._run_test_case(test_case) - -class TestRegexIsolateGlossaries(unittest.TestCase): - - def setUp(self): - - amock = mock.MagicMock() - amock.readline.return_value = 'something' - glossaries = ["\w*", "\w*", "\d+"] - self.bpe = BPE(amock, glossaries=glossaries) - - def _run_test_case(self, test_case): - orig, expected = test_case - out = self.bpe._isolate_glossaries(orig) - self.assertEqual(out, expected) - - def test_regex_glossaries(self): - orig = 'wordlikeUSAword10001wordManuelwordUSA' - exp = ['wordlike', 'USA', 'word', '10001', 'word', 'Manuel', 'word', 'USA'] - test_case = (orig, exp) - self._run_test_case(test_case) - -def encode_mock(segment, x2, x3, x4, x5, x6, x7, glosses, dropout): - if glosses.match(segment): - return (segment,) - else: - l = len(segment) - return (segment[:l//2], segment[l//2:]) - -class TestBPESegmentMethod(unittest.TestCase): - - def setUp(self): - - amock = mock.MagicMock() - amock.readline.return_value = 'something' - glossaries = ['like', 'Manuel', 'USA'] - self.bpe = BPE(amock, 
glossaries=glossaries) - - @mock.patch('apply_bpe.encode', side_effect=encode_mock) - def _run_test_case(self, test_case, encode_function): - - orig, expected = test_case - out = self.bpe.segment(orig) - - self.assertEqual(out, expected) - - def test_multiple_glossaries(self): - orig = 'wordlikeword likeManuelword' - exp = 'wo@@ rd@@ like@@ wo@@ rd like@@ Manuel@@ wo@@ rd' - test_case = (orig, exp) - self._run_test_case(test_case) - -if __name__ == '__main__': - unittest.main() diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/module.e2741a44.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/module.e2741a44.js deleted file mode 100644 index 4c49d8296bfb127d40bed73416f0010a49bcdb97..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/module.e2741a44.js +++ /dev/null @@ -1,2 +0,0 @@ -const w=t=>n=>{const e=t(n);return n.add(e),e},N=t=>(n,e)=>(t.set(n,e),e),f=Number.MAX_SAFE_INTEGER===void 0?9007199254740991:Number.MAX_SAFE_INTEGER,g=536870912,_=g*2,O=(t,n)=>e=>{const r=n.get(e);let s=r===void 0?e.size:r<_?r+1:0;if(!e.has(s))return t(e,s);if(e.sizef)throw new Error("Congratulations, you created a collection of unique numbers which uses all available integers!");for(;e.has(s);)s=Math.floor(Math.random()*f);return t(e,s)},M=new WeakMap,m=N(M),h=O(m,M),I=w(h),R=t=>typeof t.start=="function",p=new WeakMap,A=t=>({...t,connect:({call:n})=>async()=>{const{port1:e,port2:r}=new MessageChannel,s=await n("connect",{port:e},[e]);return p.set(r,s),r},disconnect:({call:n})=>async e=>{const r=p.get(e);if(r===void 0)throw new Error("The given port is not connected.");await n("disconnect",{portId:r})},isSupported:({call:n})=>()=>n("isSupported")}),E=new WeakMap,b=t=>{if(E.has(t))return E.get(t);const n=new Map;return E.set(t,n),n},W=t=>{const n=A(t);return e=>{const 
r=b(e);e.addEventListener("message",({data:o})=>{const{id:a}=o;if(a!==null&&r.has(a)){const{reject:u,resolve:c}=r.get(a);r.delete(a),o.error===void 0?c(o.result):u(new Error(o.error.message))}}),R(e)&&e.start();const s=(o,a=null,u=[])=>new Promise((c,l)=>{const d=h(r);r.set(d,{reject:l,resolve:c}),a===null?e.postMessage({id:d,method:o},u):e.postMessage({id:d,method:o,params:a},u)}),T=(o,a,u=[])=>{e.postMessage({id:null,method:o,params:a},u)};let i={};for(const[o,a]of Object.entries(n))i={...i,[o]:a({call:s,notify:T})};return{...i}}};export{I as a,W as c,h as g}; -//# sourceMappingURL=module.e2741a44.js.map diff --git a/spaces/Hila/RobustViT/imagenet_finetune.py b/spaces/Hila/RobustViT/imagenet_finetune.py deleted file mode 100644 index 8ea91a99b6cd8a9f334ea907ea85fd5f8f29c4af..0000000000000000000000000000000000000000 --- a/spaces/Hila/RobustViT/imagenet_finetune.py +++ /dev/null @@ -1,567 +0,0 @@ -import argparse -import os -import random -import shutil -import time -import warnings - -import torch -import torch.nn as nn -import torch.nn.parallel -import torch.backends.cudnn as cudnn -import torch.distributed as dist -import torch.optim -import torch.multiprocessing as mp -import torch.utils.data -import torch.utils.data.distributed -import torchvision.transforms as transforms -import torchvision.datasets as datasets -import torchvision.models as models -from segmentation_dataset import SegmentationDataset, VAL_PARTITION, TRAIN_PARTITION - -# Uncomment the expected model below - -# ViT -from ViT.ViT import vit_base_patch16_224 as vit -# from ViT.ViT import vit_large_patch16_224 as vit - -# ViT-AugReg -# from ViT.ViT_new import vit_small_patch16_224 as vit -# from ViT.ViT_new import vit_base_patch16_224 as vit -# from ViT.ViT_new import vit_large_patch16_224 as vit - -# DeiT -# from ViT.ViT import deit_base_patch16_224 as vit -# from ViT.ViT import deit_small_patch16_224 as vit - -from ViT.explainer import generate_relevance, get_image_with_relevance -import 
torchvision -import cv2 -from torch.utils.tensorboard import SummaryWriter -import json - -model_names = sorted(name for name in models.__dict__ - if name.islower() and not name.startswith("__") - and callable(models.__dict__[name])) -model_names.append("vit") - -parser = argparse.ArgumentParser(description='PyTorch ImageNet Training') -parser.add_argument('--data', metavar='DATA', - help='path to dataset') -parser.add_argument('--seg_data', metavar='SEG_DATA', - help='path to segmentation dataset') -parser.add_argument('-j', '--workers', default=4, type=int, metavar='N', - help='number of data loading workers (default: 4)') -parser.add_argument('--epochs', default=50, type=int, metavar='N', - help='number of total epochs to run') -parser.add_argument('--start-epoch', default=0, type=int, metavar='N', - help='manual epoch number (useful on restarts)') -parser.add_argument('-b', '--batch-size', default=8, type=int, - metavar='N', - help='mini-batch size (default: 256), this is the total ' - 'batch size of all GPUs on the current node when ' - 'using Data Parallel or Distributed Data Parallel') -parser.add_argument('--lr', '--learning-rate', default=3e-6, type=float, - metavar='LR', help='initial learning rate', dest='lr') -parser.add_argument('--momentum', default=0.9, type=float, metavar='M', - help='momentum') -parser.add_argument('--wd', '--weight-decay', default=1e-4, type=float, - metavar='W', help='weight decay (default: 1e-4)', - dest='weight_decay') -parser.add_argument('-p', '--print-freq', default=10, type=int, - metavar='N', help='print frequency (default: 10)') -parser.add_argument('--resume', default='', type=str, metavar='PATH', - help='path to latest checkpoint (default: none)') -parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true', - help='evaluate model on validation set') -parser.add_argument('--pretrained', dest='pretrained', action='store_true', - help='use pre-trained model') -parser.add_argument('--world-size', 
default=-1, type=int, - help='number of nodes for distributed training') -parser.add_argument('--rank', default=-1, type=int, - help='node rank for distributed training') -parser.add_argument('--dist-url', default='tcp://224.66.41.62:23456', type=str, - help='url used to set up distributed training') -parser.add_argument('--dist-backend', default='nccl', type=str, - help='distributed backend') -parser.add_argument('--gpu', default=None, type=int, - help='GPU id to use.') -parser.add_argument('--save_interval', default=20, type=int, - help='interval to save segmentation results.') -parser.add_argument('--num_samples', default=3, type=int, - help='number of samples per class for training') -parser.add_argument('--multiprocessing-distributed', action='store_true', - help='Use multi-processing distributed training to launch ' - 'N processes per node, which has N GPUs. This is the ' - 'fastest way to use PyTorch for either single node or ' - 'multi node data parallel training') -parser.add_argument('--lambda_seg', default=0.8, type=float, - help='influence of segmentation loss.') -parser.add_argument('--lambda_acc', default=0.2, type=float, - help='influence of accuracy loss.') -parser.add_argument('--experiment_folder', default=None, type=str, - help='path to folder to use for experiment.') -parser.add_argument('--dilation', default=0, type=float, - help='Use dilation on the segmentation maps.') -parser.add_argument('--lambda_background', default=2, type=float, - help='coefficient of loss for segmentation background.') -parser.add_argument('--lambda_foreground', default=0.3, type=float, - help='coefficient of loss for segmentation foreground.') -parser.add_argument('--num_classes', default=500, type=int, - help='coefficient of loss for segmentation foreground.') -parser.add_argument('--temperature', default=1, type=float, - help='temperature for softmax (mostly for DeiT).') -parser.add_argument('--class_seed', default=None, type=int, - help='seed to randomly shuffle 
classes chosen for training.') - -best_loss = float('inf') - -def main(): - args = parser.parse_args() - - if args.experiment_folder is None: - args.experiment_folder = f'experiment/' \ - f'lr_{args.lr}_seg_{args.lambda_seg}_acc_{args.lambda_acc}' \ - f'_bckg_{args.lambda_background}_fgd_{args.lambda_foreground}' - if args.temperature != 1: - args.experiment_folder = args.experiment_folder + f'_tempera_{args.temperature}' - if args.batch_size != 8: - args.experiment_folder = args.experiment_folder + f'_bs_{args.batch_size}' - if args.num_classes != 500: - args.experiment_folder = args.experiment_folder + f'_num_classes_{args.num_classes}' - if args.num_samples != 3: - args.experiment_folder = args.experiment_folder + f'_num_samples_{args.num_samples}' - if args.epochs != 150: - args.experiment_folder = args.experiment_folder + f'_num_epochs_{args.epochs}' - if args.class_seed is not None: - args.experiment_folder = args.experiment_folder + f'_seed_{args.class_seed}' - - if os.path.exists(args.experiment_folder): - raise Exception(f"Experiment path {args.experiment_folder} already exists!") - os.mkdir(args.experiment_folder) - os.mkdir(f'{args.experiment_folder}/train_samples') - os.mkdir(f'{args.experiment_folder}/val_samples') - - with open(f'{args.experiment_folder}/commandline_args.txt', 'w') as f: - json.dump(args.__dict__, f, indent=2) - - if args.gpu is not None: - warnings.warn('You have chosen a specific GPU. 
This will completely ' - 'disable data parallelism.') - - if args.dist_url == "env://" and args.world_size == -1: - args.world_size = int(os.environ["WORLD_SIZE"]) - - args.distributed = args.world_size > 1 or args.multiprocessing_distributed - - ngpus_per_node = torch.cuda.device_count() - if args.multiprocessing_distributed: - # Since we have ngpus_per_node processes per node, the total world_size - # needs to be adjusted accordingly - args.world_size = ngpus_per_node * args.world_size - # Use torch.multiprocessing.spawn to launch distributed processes: the - # main_worker process function - mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args)) - else: - # Simply call main_worker function - main_worker(args.gpu, ngpus_per_node, args) - - -def main_worker(gpu, ngpus_per_node, args): - global best_loss - args.gpu = gpu - - if args.gpu is not None: - print("Use GPU: {} for training".format(args.gpu)) - - if args.distributed: - if args.dist_url == "env://" and args.rank == -1: - args.rank = int(os.environ["RANK"]) - if args.multiprocessing_distributed: - # For multiprocessing distributed training, rank needs to be the - # global rank among all the processes - args.rank = args.rank * ngpus_per_node + gpu - dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, - world_size=args.world_size, rank=args.rank) - # create model - print("=> creating model") - model = vit(pretrained=True).cuda() - model.train() - print("done") - - if not torch.cuda.is_available(): - print('using CPU, this will be slow') - elif args.distributed: - # For multiprocessing distributed, DistributedDataParallel constructor - # should always set the single device scope, otherwise, - # DistributedDataParallel will use all available devices. 
- if args.gpu is not None: - torch.cuda.set_device(args.gpu) - model.cuda(args.gpu) - # When using a single GPU per process and per - # DistributedDataParallel, we need to divide the batch size - # ourselves based on the total number of GPUs we have - args.batch_size = int(args.batch_size / ngpus_per_node) - args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node) - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) - else: - model.cuda() - # DistributedDataParallel will divide and allocate batch_size to all - # available GPUs if device_ids are not set - model = torch.nn.parallel.DistributedDataParallel(model) - elif args.gpu is not None: - torch.cuda.set_device(args.gpu) - model = model.cuda(args.gpu) - else: - # DataParallel will divide and allocate batch_size to all available GPUs - print("start") - model = torch.nn.DataParallel(model).cuda() - - # define loss function (criterion) and optimizer - criterion = nn.CrossEntropyLoss().cuda(args.gpu) - optimizer = torch.optim.AdamW(model.parameters(), args.lr, weight_decay=args.weight_decay) - - # optionally resume from a checkpoint - if args.resume: - if os.path.isfile(args.resume): - print("=> loading checkpoint '{}'".format(args.resume)) - if args.gpu is None: - checkpoint = torch.load(args.resume) - else: - # Map model to be loaded to specified single gpu. 
- loc = 'cuda:{}'.format(args.gpu) - checkpoint = torch.load(args.resume, map_location=loc) - args.start_epoch = checkpoint['epoch'] - best_loss = checkpoint['best_loss'] - if args.gpu is not None: - # best_loss may be from a checkpoint from a different GPU - best_loss = best_loss.to(args.gpu) - model.load_state_dict(checkpoint['state_dict']) - optimizer.load_state_dict(checkpoint['optimizer']) - print("=> loaded checkpoint '{}' (epoch {})" - .format(args.resume, checkpoint['epoch'])) - else: - print("=> no checkpoint found at '{}'".format(args.resume)) - - cudnn.benchmark = True - - train_dataset = SegmentationDataset(args.seg_data, args.data, partition=TRAIN_PARTITION, train_classes=args.num_classes, - num_samples=args.num_samples, seed=args.class_seed) - - if args.distributed: - train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset) - else: - train_sampler = None - - train_loader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.batch_size, shuffle=(train_sampler is None), - num_workers=args.workers, pin_memory=True, sampler=train_sampler) - - val_dataset = SegmentationDataset(args.seg_data, args.data, partition=VAL_PARTITION, train_classes=args.num_classes, - num_samples=1, seed=args.class_seed) - - val_loader = torch.utils.data.DataLoader( - val_dataset, batch_size=10, shuffle=False, - num_workers=args.workers, pin_memory=True) - - if args.evaluate: - validate(val_loader, model, criterion, 0, args) - return - - for epoch in range(args.start_epoch, args.epochs): - if args.distributed: - train_sampler.set_epoch(epoch) - adjust_learning_rate(optimizer, epoch, args) - - log_dir = os.path.join(args.experiment_folder, 'logs') - logger = SummaryWriter(log_dir=log_dir) - args.logger = logger - - # train for one epoch - train(train_loader, model, criterion, optimizer, epoch, args) - - # evaluate on validation set - loss1 = validate(val_loader, model, criterion, epoch, args) - - # remember best acc@1 and save checkpoint - is_best = 
loss1 <= best_loss - best_loss = min(loss1, best_loss) - - if not args.multiprocessing_distributed or (args.multiprocessing_distributed - and args.rank % ngpus_per_node == 0): - save_checkpoint({ - 'epoch': epoch + 1, - 'state_dict': model.state_dict(), - 'best_loss': best_loss, - 'optimizer' : optimizer.state_dict(), - }, is_best, folder=args.experiment_folder) - - -def train(train_loader, model, criterion, optimizer, epoch, args): - mse_criterion = torch.nn.MSELoss(reduction='mean') - - losses = AverageMeter('Loss', ':.4e') - top1 = AverageMeter('Acc@1', ':6.2f') - top5 = AverageMeter('Acc@5', ':6.2f') - orig_top1 = AverageMeter('Acc@1_orig', ':6.2f') - orig_top5 = AverageMeter('Acc@5_orig', ':6.2f') - progress = ProgressMeter( - len(train_loader), - [losses, top1, top5, orig_top1, orig_top5], - prefix="Epoch: [{}]".format(epoch)) - - orig_model = vit(pretrained=True).cuda() - orig_model.eval() - - # switch to train mode - model.train() - - for i, (seg_map, image_ten, class_name) in enumerate(train_loader): - if torch.cuda.is_available(): - image_ten = image_ten.cuda(args.gpu, non_blocking=True) - seg_map = seg_map.cuda(args.gpu, non_blocking=True) - class_name = class_name.cuda(args.gpu, non_blocking=True) - - # segmentation loss - relevance = generate_relevance(model, image_ten, index=class_name) - - reverse_seg_map = seg_map.clone() - reverse_seg_map[reverse_seg_map == 1] = -1 - reverse_seg_map[reverse_seg_map == 0] = 1 - reverse_seg_map[reverse_seg_map == -1] = 0 - background_loss = mse_criterion(relevance * reverse_seg_map, torch.zeros_like(relevance)) - foreground_loss = mse_criterion(relevance * seg_map, seg_map) - segmentation_loss = args.lambda_background * background_loss - segmentation_loss += args.lambda_foreground * foreground_loss - - # classification loss - output = model(image_ten) - with torch.no_grad(): - output_orig = orig_model(image_ten) - - _, pred = output.topk(1, 1, True, True) - pred = pred.flatten() - - if args.temperature != 1: - output 
= output / args.temperature - classification_loss = criterion(output, pred) - - loss = args.lambda_seg * segmentation_loss + args.lambda_acc * classification_loss - - # debugging output - if i % args.save_interval == 0: - orig_relevance = generate_relevance(orig_model, image_ten, index=class_name) - for j in range(image_ten.shape[0]): - image = get_image_with_relevance(image_ten[j], torch.ones_like(image_ten[j])) - new_vis = get_image_with_relevance(image_ten[j], relevance[j]) - old_vis = get_image_with_relevance(image_ten[j], orig_relevance[j]) - gt = get_image_with_relevance(image_ten[j], seg_map[j]) - h_img = cv2.hconcat([image, gt, old_vis, new_vis]) - cv2.imwrite(f'{args.experiment_folder}/train_samples/res_{i}_{j}.jpg', h_img) - - # measure accuracy and record loss - acc1, acc5 = accuracy(output, class_name, topk=(1, 5)) - losses.update(loss.item(), image_ten.size(0)) - top1.update(acc1[0], image_ten.size(0)) - top5.update(acc5[0], image_ten.size(0)) - - # metrics for original vit - acc1_orig, acc5_orig = accuracy(output_orig, class_name, topk=(1, 5)) - orig_top1.update(acc1_orig[0], image_ten.size(0)) - orig_top5.update(acc5_orig[0], image_ten.size(0)) - - # compute gradient and do SGD step - optimizer.zero_grad() - loss.backward() - optimizer.step() - - if i % args.print_freq == 0: - progress.display(i) - args.logger.add_scalar('{}/{}'.format('train', 'segmentation_loss'), segmentation_loss, - epoch*len(train_loader)+i) - args.logger.add_scalar('{}/{}'.format('train', 'classification_loss'), classification_loss, - epoch * len(train_loader) + i) - args.logger.add_scalar('{}/{}'.format('train', 'orig_top1'), acc1_orig, - epoch * len(train_loader) + i) - args.logger.add_scalar('{}/{}'.format('train', 'top1'), acc1, - epoch * len(train_loader) + i) - args.logger.add_scalar('{}/{}'.format('train', 'orig_top5'), acc5_orig, - epoch * len(train_loader) + i) - args.logger.add_scalar('{}/{}'.format('train', 'top5'), acc5, - epoch * len(train_loader) + i) - 
 args.logger.add_scalar('{}/{}'.format('train', 'tot_loss'), loss, - epoch * len(train_loader) + i) - - -def validate(val_loader, model, criterion, epoch, args): - mse_criterion = torch.nn.MSELoss(reduction='mean') - - losses = AverageMeter('Loss', ':.4e') - top1 = AverageMeter('Acc@1', ':6.2f') - top5 = AverageMeter('Acc@5', ':6.2f') - orig_top1 = AverageMeter('Acc@1_orig', ':6.2f') - orig_top5 = AverageMeter('Acc@5_orig', ':6.2f') - progress = ProgressMeter( - len(val_loader), - [losses, top1, top5, orig_top1, orig_top5], - prefix="Epoch: [{}]".format(epoch)) - - # switch to evaluate mode - model.eval() - - orig_model = vit(pretrained=True).cuda() - orig_model.eval() - - with torch.no_grad(): - for i, (seg_map, image_ten, class_name) in enumerate(val_loader): - if args.gpu is not None: - image_ten = image_ten.cuda(args.gpu, non_blocking=True) - if torch.cuda.is_available(): - seg_map = seg_map.cuda(args.gpu, non_blocking=True) - class_name = class_name.cuda(args.gpu, non_blocking=True) - - # segmentation loss - with torch.enable_grad(): - relevance = generate_relevance(model, image_ten, index=class_name) - - reverse_seg_map = seg_map.clone() - reverse_seg_map[reverse_seg_map == 1] = -1 - reverse_seg_map[reverse_seg_map == 0] = 1 - reverse_seg_map[reverse_seg_map == -1] = 0 - background_loss = mse_criterion(relevance * reverse_seg_map, torch.zeros_like(relevance)) - foreground_loss = mse_criterion(relevance * seg_map, seg_map) - segmentation_loss = args.lambda_background * background_loss - segmentation_loss += args.lambda_foreground * foreground_loss - - # classification loss - with torch.no_grad(): - output = model(image_ten) - output_orig = orig_model(image_ten) - - _, pred = output.topk(1, 1, True, True) - pred = pred.flatten() - if args.temperature != 1: - output = output / args.temperature - classification_loss = criterion(output, pred) - - loss = args.lambda_seg * segmentation_loss + args.lambda_acc * classification_loss - - # save results - if i % 
args.save_interval == 0: - with torch.enable_grad(): - orig_relevance = generate_relevance(orig_model, image_ten, index=class_name) - for j in range(image_ten.shape[0]): - image = get_image_with_relevance(image_ten[j], torch.ones_like(image_ten[j])) - new_vis = get_image_with_relevance(image_ten[j], relevance[j]) - old_vis = get_image_with_relevance(image_ten[j], orig_relevance[j]) - gt = get_image_with_relevance(image_ten[j], seg_map[j]) - h_img = cv2.hconcat([image, gt, old_vis, new_vis]) - cv2.imwrite(f'{args.experiment_folder}/val_samples/res_{i}_{j}.jpg', h_img) - - # measure accuracy and record loss - acc1, acc5 = accuracy(output, class_name, topk=(1, 5)) - losses.update(loss.item(), image_ten.size(0)) - top1.update(acc1[0], image_ten.size(0)) - top5.update(acc5[0], image_ten.size(0)) - - # metrics for original vit - acc1_orig, acc5_orig = accuracy(output_orig, class_name, topk=(1, 5)) - orig_top1.update(acc1_orig[0], image_ten.size(0)) - orig_top5.update(acc5_orig[0], image_ten.size(0)) - - if i % args.print_freq == 0: - progress.display(i) - args.logger.add_scalar('{}/{}'.format('val', 'segmentation_loss'), segmentation_loss, - epoch * len(val_loader) + i) - args.logger.add_scalar('{}/{}'.format('val', 'classification_loss'), classification_loss, - epoch * len(val_loader) + i) - args.logger.add_scalar('{}/{}'.format('val', 'orig_top1'), acc1_orig, - epoch * len(val_loader) + i) - args.logger.add_scalar('{}/{}'.format('val', 'top1'), acc1, - epoch * len(val_loader) + i) - args.logger.add_scalar('{}/{}'.format('val', 'orig_top5'), acc5_orig, - epoch * len(val_loader) + i) - args.logger.add_scalar('{}/{}'.format('val', 'top5'), acc5, - epoch * len(val_loader) + i) - args.logger.add_scalar('{}/{}'.format('val', 'tot_loss'), loss, - epoch * len(val_loader) + i) - - # TODO: this should also be done with the ProgressMeter - print(' * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}' - .format(top1=top1, top5=top5)) - - return losses.avg - - -def save_checkpoint(state, 
 is_best, folder, filename='checkpoint.pth.tar'): - torch.save(state, f'{folder}/{filename}') - if is_best: - shutil.copyfile(f'{folder}/{filename}', f'{folder}/model_best.pth.tar') - - -class AverageMeter(object): - """Computes and stores the average and current value""" - def __init__(self, name, fmt=':f'): - self.name = name - self.fmt = fmt - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - def __str__(self): - fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})' - return fmtstr.format(**self.__dict__) - - -class ProgressMeter(object): - def __init__(self, num_batches, meters, prefix=""): - self.batch_fmtstr = self._get_batch_fmtstr(num_batches) - self.meters = meters - self.prefix = prefix - - def display(self, batch): - entries = [self.prefix + self.batch_fmtstr.format(batch)] - entries += [str(meter) for meter in self.meters] - print('\t'.join(entries)) - - def _get_batch_fmtstr(self, num_batches): - num_digits = len(str(num_batches // 1)) - fmt = '{:' + str(num_digits) + 'd}' - return '[' + fmt + '/' + fmt.format(num_batches) + ']' - -def adjust_learning_rate(optimizer, epoch, args): - """Decays the learning rate by a factor of 0.85 every 2 epochs""" - lr = args.lr * (0.85 ** (epoch // 2)) - for param_group in optimizer.param_groups: - param_group['lr'] = lr - - -def accuracy(output, target, topk=(1,)): - """Computes the accuracy over the k top predictions for the specified values of k""" - with torch.no_grad(): - maxk = max(topk) - batch_size = target.size(0) - - _, pred = output.topk(maxk, 1, True, True) - pred = pred.t() - correct = pred.eq(target.view(1, -1).expand_as(pred)) - - res = [] - for k in topk: - correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True) - res.append(correct_k.mul_(100.0 / batch_size)) - return res - - -if __name__ == 
'__main__': - main() \ No newline at end of file diff --git a/spaces/Hoodady/3DFuse/cldm/logger.py b/spaces/Hoodady/3DFuse/cldm/logger.py deleted file mode 100644 index 6a8803846f2a8979f87f3cf9ea5b12869439e62f..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/cldm/logger.py +++ /dev/null @@ -1,76 +0,0 @@ -import os - -import numpy as np -import torch -import torchvision -from PIL import Image -from pytorch_lightning.callbacks import Callback -from pytorch_lightning.utilities.distributed import rank_zero_only - - -class ImageLogger(Callback): - def __init__(self, batch_frequency=2000, max_images=4, clamp=True, increase_log_steps=True, - rescale=True, disabled=False, log_on_batch_idx=False, log_first_step=False, - log_images_kwargs=None): - super().__init__() - self.rescale = rescale - self.batch_freq = batch_frequency - self.max_images = max_images - if not increase_log_steps: - self.log_steps = [self.batch_freq] - self.clamp = clamp - self.disabled = disabled - self.log_on_batch_idx = log_on_batch_idx - self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {} - self.log_first_step = log_first_step - - @rank_zero_only - def log_local(self, save_dir, split, images, global_step, current_epoch, batch_idx): - root = os.path.join(save_dir, "image_log", split) - for k in images: - grid = torchvision.utils.make_grid(images[k], nrow=4) - if self.rescale: - grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w - grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1) - grid = grid.numpy() - grid = (grid * 255).astype(np.uint8) - filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format(k, global_step, current_epoch, batch_idx) - path = os.path.join(root, filename) - os.makedirs(os.path.split(path)[0], exist_ok=True) - Image.fromarray(grid).save(path) - - def log_img(self, pl_module, batch, batch_idx, split="train"): - check_idx = batch_idx # if self.log_on_batch_idx else pl_module.global_step - if (self.check_frequency(check_idx) and # batch_idx % 
self.batch_freq == 0 - hasattr(pl_module, "log_images") and - callable(pl_module.log_images) and - self.max_images > 0): - logger = type(pl_module.logger) - - is_train = pl_module.training - if is_train: - pl_module.eval() - - with torch.no_grad(): - images = pl_module.log_images(batch, split=split, **self.log_images_kwargs) - - for k in images: - N = min(images[k].shape[0], self.max_images) - images[k] = images[k][:N] - if isinstance(images[k], torch.Tensor): - images[k] = images[k].detach().cpu() - if self.clamp: - images[k] = torch.clamp(images[k], -1., 1.) - - self.log_local(pl_module.logger.save_dir, split, images, - pl_module.global_step, pl_module.current_epoch, batch_idx) - - if is_train: - pl_module.train() - - def check_frequency(self, check_idx): - return check_idx % self.batch_freq == 0 - - def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - if not self.disabled: - self.log_img(pl_module, batch, batch_idx, split="train") diff --git a/spaces/HuggingFaceH4/human_eval_llm_leaderboard/src/init.py b/spaces/HuggingFaceH4/human_eval_llm_leaderboard/src/init.py deleted file mode 100644 index 70848c67fca02bfa66395bf904ad6b26b3182b43..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceH4/human_eval_llm_leaderboard/src/init.py +++ /dev/null @@ -1,53 +0,0 @@ -import os -from huggingface_hub import Repository - -H4_TOKEN = os.environ.get("H4_TOKEN", None) - - -def get_all_requested_models(requested_models_dir): - depth = 1 - file_names = [] - - for root, dirs, files in os.walk(requested_models_dir): - current_depth = root.count(os.sep) - requested_models_dir.count(os.sep) - if current_depth == depth: - file_names.extend([os.path.join(root, file) for file in files]) - - return set([file_name.lower().split("eval_requests/")[1] for file_name in file_names]) - -def load_all_info_from_hub(HUMAN_EVAL_REPO, GPT_4_EVAL_REPO): - human_eval_repo = None - if H4_TOKEN and not os.path.isdir("./human_evals"): - 
print("Pulling human evaluation repo") - human_eval_repo = Repository( - local_dir="./human_evals/", - clone_from=HUMAN_EVAL_REPO, - use_auth_token=H4_TOKEN, - repo_type="dataset", - ) - human_eval_repo.git_pull() - - gpt_4_eval_repo = None - if H4_TOKEN and not os.path.isdir("./gpt_4_evals"): - print("Pulling GPT-4 evaluation repo") - gpt_4_eval_repo = Repository( - local_dir="./gpt_4_evals/", - clone_from=GPT_4_EVAL_REPO, - use_auth_token=H4_TOKEN, - repo_type="dataset", - ) - gpt_4_eval_repo.git_pull() - - return human_eval_repo, gpt_4_eval_repo - - -#def load_results(model, benchmark, metric): -# file_path = os.path.join("autoevals", model, f"{model}-eval_{benchmark}.json") -# if not os.path.exists(file_path): -# return 0.0, None - -# with open(file_path) as fp: -# data = json.load(fp) -# accs = np.array([v[metric] for k, v in data["results"].items()]) -# mean_acc = np.mean(accs) -# return mean_acc, data["config"]["model_args"] diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/widgets/zipf.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/widgets/zipf.py deleted file mode 100644 index 583245365430e147b10b709ca860be968beb4692..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/widgets/zipf.py +++ /dev/null @@ -1,107 +0,0 @@ -import gradio as gr -import pandas as pd - -from widgets.widget_base import Widget -from data_measurements.dataset_statistics import DatasetStatisticsCacheClass as dmt_cls -import utils - -logs = utils.prepare_logging(__file__) - - -class Zipf(Widget): - def __init__(self): - self.zipf_table = gr.DataFrame(render=False) - self.alpha_warning = gr.Markdown( - value="Your alpha value is a bit on the high side, which means that the distribution over words in this dataset is a bit unnatural. 
 This could be due to non-language items throughout the dataset.", - render=False, - visible=False, - ) - self.xmin_warning = gr.Markdown( - value="The minimum rank for this fit is a bit on the high side, which means that the frequencies of your most common words aren't distributed as would be expected by Zipf's law.", - render=False, - visible=False, - ) - self.zipf_summary = gr.Markdown(render=False) - self.zipf_plot = gr.Plot(render=False) - - def render(self): - with gr.TabItem("Vocabulary Distribution: Zipf's Law Fit"): - gr.Markdown( - "Use this widget for the counts of different words in your dataset, measuring the difference between the observed count and the expected count under Zipf's law." - ) - gr.Markdown( - """This shows how close the observed language is to an ideal - natural language distribution following [Zipf's law](https://en.wikipedia.org/wiki/Zipf%27s_law), - calculated by minimizing the [Kolmogorov-Smirnov (KS) statistic](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test).""" - ) - gr.Markdown( - """ - A Zipfian distribution follows the power law: $p(x) \propto x^{-\alpha}$ with an ideal α value of 1. - - In general, an alpha greater than 2 or a minimum rank greater than 10 (take with a grain of salt) means that your distribution is relatively _unnatural_ for natural language. This can be a sign of mixed artefacts in the dataset, such as HTML markup. - - Below, you can see the counts of each word in your dataset vs. the expected number of counts following a Zipfian distribution. 
- - ----- - - ### Here is your dataset's Zipf results: - """ - ) - self.zipf_table.render() - self.zipf_summary.render() - self.zipf_plot.render() - self.alpha_warning.render() - self.xmin_warning.render() - - def update(self, dstats: dmt_cls): - z = dstats.z - zipf_fig = dstats.zipf_fig - - zipf_summary = ( - "The optimal alpha based on this dataset is: **" - + str(round(z.alpha, 2)) - + "**, with a KS distance of: **" - + str(round(z.ks_distance, 2)) - ) - zipf_summary += ( - "**. This was fit with a minimum rank value of: **" - + str(int(z.xmin)) - + "**, which is the optimal rank *beyond which* the scaling regime of the power law fits best." - ) - - fit_results_table = pd.DataFrame.from_dict( - { - r"Alpha:": [str("%.2f" % z.alpha)], - "KS distance:": [str("%.2f" % z.ks_distance)], - "Min rank:": [str("%s" % int(z.xmin))], - }, - columns=["Results"], - orient="index", - ) - fit_results_table.index.name = "" - - output = { - self.zipf_table: fit_results_table, - self.zipf_summary: zipf_summary, - self.zipf_plot: zipf_fig, - self.alpha_warning: gr.Markdown.update(visible=False), - self.xmin_warning: gr.Markdown.update(visible=False), - } - if z.alpha > 2: - output[self.alpha_warning] = gr.Markdown.update(visible=True) - if z.xmin > 5: - output[self.xmin_warning] = gr.Markdown.update(visible=True) - return output - - @property - def output_components(self): - return [ - self.zipf_table, - self.zipf_plot, - self.zipf_summary, - self.alpha_warning, - self.xmin_warning, - ] - - def add_events(self, state: gr.State): - pass diff --git a/spaces/ICML2022/OFA/fairseq/examples/unsupervised_quality_estimation/aggregate_scores.py b/spaces/ICML2022/OFA/fairseq/examples/unsupervised_quality_estimation/aggregate_scores.py deleted file mode 100644 index 66d50d07ff2067b802b90a2aadd88df23153830a..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/unsupervised_quality_estimation/aggregate_scores.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) 
Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import sys - -import numpy as np - - -aggregate_funcs = { - "std": np.std, - "var": np.var, - "median": np.median, - "mean": np.mean, - "min": np.min, - "max": np.max, -} - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("-i", "--input_file", required=True, type=str) - parser.add_argument("-n", "--repeat_times", required=True, type=int) - parser.add_argument("-o", "--output_file", required=False) - parser.add_argument("-f", "--func", required=False, default="mean") - args = parser.parse_args() - - stream = open(args.output_file, "w") if args.output_file else sys.stdout - - segment_scores = [] - for line in open(args.input_file): - segment_scores.append(float(line.strip())) - if len(segment_scores) == args.repeat_times: - stream.write("{}\n".format(aggregate_funcs[args.func](segment_scores))) - segment_scores = [] - - -if __name__ == "__main__": - main() diff --git a/spaces/IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1-Demo/launch.py b/spaces/IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1-Demo/launch.py deleted file mode 100644 index 20f7bb0a8ffb6798f40db8ef94e1a7661f9362bc..0000000000000000000000000000000000000000 --- a/spaces/IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1-Demo/launch.py +++ /dev/null @@ -1,205 +0,0 @@ -#!/usr/bin/env python -# this code modify from https://huggingface.co/spaces/lykeven/visualglm-6b -import gradio as gr -import re -from PIL import Image -import torch -from io import BytesIO -import hashlib -import os -from transformers import LlamaForCausalLM, LlamaTokenizer, BlipImageProcessor, BitsAndBytesConfig, AutoModelForCausalLM - -DESCRIPTION = '''# Ziya-Blip2-14B''' - -MAINTENANCE_NOTICE1 = 'Hint 1: If the app report "Something went wrong, connection error out", please turn off your proxy and retry.\nHint 2: If you upload a large size of image like 10MB, it may take 
some time to upload and process. Please be patient and wait.' -MAINTENANCE_NOTICE2 = '提示1: 如果应用报了“Something went wrong, connection error out”的错误,请关闭代理并重试。\n提示2: 如果你上传了很大的图片,比如10MB大小,那将需要一些时间来上传和处理,请耐心等待。' - -NOTES = 'This app is adapted from https://huggingface.co/IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1. It would be recommended to check out the repo if you want to see the detail of our model. And most of the codes attach to this demo are modify from lykeven/visualglm-6b.' - -import json - -default_chatbox = [] - - -def is_chinese(text): - zh_pattern = re.compile(u'[\u4e00-\u9fa5]+') - return zh_pattern.search(text) - -AUTH_TOKEN = os.getenv("AUTH_TOKEN") - -LM_MODEL_PATH = "wuxiaojun/Ziya-LLaMA-13B-v1" -# LM_MODEL_PATH = "/cognitive_comp/wuxiaojun/pretrained/pytorch/huggingface/Ziya-LLaMA-13B-v1" -lm_model = LlamaForCausalLM.from_pretrained( - LM_MODEL_PATH, - device_map="auto", - torch_dtype=torch.float16, - use_auth_token=AUTH_TOKEN, - quantization_config=BitsAndBytesConfig(load_in_4bit=True)) - -TOKENIZER_PATH = "IDEA-CCNL/Ziya-LLaMA-13B-v1" -# TOKENIZER_PATH = "/cognitive_comp/wuxiaojun/pretrained/pytorch/huggingface/Ziya-LLaMA-13B-v1" -# tokenizer = LlamaTokenizer.from_pretrained(LM_MODEL_PATH, use_auth_token=AUTH_TOKEN) -tokenizer = LlamaTokenizer.from_pretrained(TOKENIZER_PATH) - -# visual model -OPENAI_CLIP_MEAN = [0.48145466, 0.4578275, 0.40821073] -OPENAI_CLIP_STD = [0.26862954, 0.26130258, 0.27577711] -# demo.py is in the project path, so we can use local path ".". 
Otherwise you should use "IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1" -visual_model_path = "IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1" -# visual_model_path = "/cognitive_comp/wuxiaojun/pretrained/pytorch/huggingface/Ziya-BLIP2-14B-Visual-v1" -model = AutoModelForCausalLM.from_pretrained( - visual_model_path, - trust_remote_code=True, use_auth_token=AUTH_TOKEN, - torch_dtype=torch.float16) -model.cuda() # if you use on cpu, comment this line -model.language_model = lm_model -image_size = model.config.vision_config.image_size -image_processor = BlipImageProcessor( - size={"height": image_size, "width": image_size}, - image_mean=OPENAI_CLIP_MEAN, - image_std=OPENAI_CLIP_STD, -) - -def post( - input_text, - temperature, - top_p, - image_prompt, - result_previous, - hidden_image - ): - result_text = [(ele[0], ele[1]) for ele in result_previous] - previous_querys = [] - previous_outputs = [] - for i in range(len(result_text)-1, -1, -1): - if result_text[i][0] == "": - del result_text[i] - else: - previous_querys.append(result_text[i][0]) - previous_outputs.append(result_text[i][1]) - - is_zh = is_chinese(input_text) - - if image_prompt is None: - print("Image empty") - if is_zh: - result_text.append((input_text, '图片为空!请上传图片并重试。')) - else: - result_text.append((input_text, 'Image empty! Please upload a image and retry.')) - return input_text, result_text, hidden_image - elif input_text == "": - print("Text empty") - result_text.append((input_text, 'Text empty! 
Please enter text and retry.')) - return "", result_text, hidden_image - - generate_config = { - "max_new_tokens": 128, - "top_p": top_p, - "temperature": temperature, - "repetition_penalty": 1.18, - } - img = Image.open(image_prompt) - pixel_values = image_processor( - img, - return_tensors="pt").pixel_values.to( - model.device).to(model.dtype) - output_buffer = BytesIO() - img.save(output_buffer, "PNG") - byte_data = output_buffer.getvalue() - md = hashlib.md5() - md.update(byte_data) - img_hash = md.hexdigest() - if img_hash != hidden_image: - previous_querys = [] - previous_outputs = [] - result_text = [] - - answer = model.chat( - tokenizer=tokenizer, - pixel_values=pixel_values, - query=input_text, - previous_querys=previous_querys, - previous_outputs=previous_outputs, - **generate_config, - ) - - result_text.append((input_text, answer)) - print(result_text) - return "", result_text, img_hash - - -def clear_fn(value): - return "", default_chatbox, None - -def clear_fn2(value): - return default_chatbox - -def io_fn(a, b, c): - print(f"call io_fn") - return a, b - - -def change_language(value): - if value == "Change hint to English": - return "提示变为中文", MAINTENANCE_NOTICE1 - else: - return "Change hint to English", MAINTENANCE_NOTICE2 - - -def main(): - gr.close_all() - examples = [] - with open("./examples/example_inputs.jsonl") as f: - for line in f: - data = json.loads(line) - examples.append(data) - - - with gr.Blocks(css='style.css') as demo: - - with gr.Row(): - with gr.Column(scale=4.5): - with gr.Group(): - input_text = gr.Textbox(label='Input Text', placeholder='Please enter text prompt below and press ENTER.') - with gr.Row(): - run_button = gr.Button('Generate') - clear_button = gr.Button('Clear') - - image_prompt = gr.Image(type="filepath", label="Image Prompt", value=None) - with gr.Row(): - temperature = gr.Slider(maximum=1, value=0.7, minimum=0, label='Temperature') - top_p = gr.Slider(maximum=1, value=0.1, minimum=0, label='Top P') - with 
 gr.Group(): - with gr.Row(): - with gr.Column(scale=7): - maintenance_notice = gr.Markdown(MAINTENANCE_NOTICE1) - with gr.Column(scale=2): - change_button = gr.Button('Change hint to English', visible=False) - with gr.Column(scale=5.5): - result_text = gr.components.Chatbot(label='Multi-round conversation History', value=[]).style(height=550) - hidden_image_hash = gr.Textbox(visible=False) - - gr_examples = gr.Examples(examples=[[example["text"], example["image"]] for example in examples], - inputs=[input_text, image_prompt], - label="Example Inputs (Click to insert an example into the input box)", - examples_per_page=3) - - gr.Markdown(NOTES) - - print(gr.__version__) - run_button.click(fn=post,inputs=[input_text, temperature, top_p, image_prompt, result_text, hidden_image_hash], - outputs=[input_text, result_text, hidden_image_hash]) - input_text.submit(fn=post,inputs=[input_text, temperature, top_p, image_prompt, result_text, hidden_image_hash], - outputs=[input_text, result_text, hidden_image_hash]) - clear_button.click(fn=clear_fn, inputs=clear_button, outputs=[input_text, result_text, image_prompt]) - image_prompt.upload(fn=clear_fn2, inputs=clear_button, outputs=[result_text]) - image_prompt.clear(fn=clear_fn2, inputs=clear_button, outputs=[result_text]) - - print(gr.__version__) - - demo.queue(concurrency_count=10) - demo.launch(server_name="0.0.0.0") - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/Intae/deepfake/training/zoo/classifiers.py b/spaces/Intae/deepfake/training/zoo/classifiers.py deleted file mode 100644 index f5899c3ee9d71d3f9ea7ad31c53ce6ed3f9c7e2c..0000000000000000000000000000000000000000 --- a/spaces/Intae/deepfake/training/zoo/classifiers.py +++ /dev/null @@ -1,172 +0,0 @@ -from functools import partial - -import numpy as np -import torch -from timm.models.efficientnet import tf_efficientnet_b4_ns, tf_efficientnet_b3_ns, \ - tf_efficientnet_b5_ns, tf_efficientnet_b2_ns, tf_efficientnet_b6_ns, 
tf_efficientnet_b7_ns -from torch import nn -from torch.nn.modules.dropout import Dropout -from torch.nn.modules.linear import Linear -from torch.nn.modules.pooling import AdaptiveAvgPool2d - -encoder_params = { - "tf_efficientnet_b3_ns": { - "features": 1536, - "init_op": partial(tf_efficientnet_b3_ns, pretrained=True, drop_path_rate=0.2) - }, - "tf_efficientnet_b2_ns": { - "features": 1408, - "init_op": partial(tf_efficientnet_b2_ns, pretrained=False, drop_path_rate=0.2) - }, - "tf_efficientnet_b4_ns": { - "features": 1792, - "init_op": partial(tf_efficientnet_b4_ns, pretrained=True, drop_path_rate=0.5) - }, - "tf_efficientnet_b5_ns": { - "features": 2048, - "init_op": partial(tf_efficientnet_b5_ns, pretrained=True, drop_path_rate=0.2) - }, - "tf_efficientnet_b4_ns_03d": { - "features": 1792, - "init_op": partial(tf_efficientnet_b4_ns, pretrained=True, drop_path_rate=0.3) - }, - "tf_efficientnet_b5_ns_03d": { - "features": 2048, - "init_op": partial(tf_efficientnet_b5_ns, pretrained=True, drop_path_rate=0.3) - }, - "tf_efficientnet_b5_ns_04d": { - "features": 2048, - "init_op": partial(tf_efficientnet_b5_ns, pretrained=True, drop_path_rate=0.4) - }, - "tf_efficientnet_b6_ns": { - "features": 2304, - "init_op": partial(tf_efficientnet_b6_ns, pretrained=True, drop_path_rate=0.2) - }, - "tf_efficientnet_b7_ns": { - "features": 2560, - "init_op": partial(tf_efficientnet_b7_ns, pretrained=True, drop_path_rate=0.2) - }, - "tf_efficientnet_b6_ns_04d": { - "features": 2304, - "init_op": partial(tf_efficientnet_b6_ns, pretrained=True, drop_path_rate=0.4) - }, -} - - -def setup_srm_weights(input_channels: int = 3) -> torch.Tensor: - """Creates the SRM kernels for noise analysis.""" - # note: values taken from Zhou et al., "Learning Rich Features for Image Manipulation Detection", CVPR2018 - srm_kernel = torch.from_numpy(np.array([ - [ # srm 1/2 horiz - [0., 0., 0., 0., 0.], # noqa: E241,E201 - [0., 0., 0., 0., 0.], # noqa: E241,E201 - [0., 1., -2., 1., 0.], # noqa: 
E241,E201 - [0., 0., 0., 0., 0.], # noqa: E241,E201 - [0., 0., 0., 0., 0.], # noqa: E241,E201 - ], [ # srm 1/4 - [0., 0., 0., 0., 0.], # noqa: E241,E201 - [0., -1., 2., -1., 0.], # noqa: E241,E201 - [0., 2., -4., 2., 0.], # noqa: E241,E201 - [0., -1., 2., -1., 0.], # noqa: E241,E201 - [0., 0., 0., 0., 0.], # noqa: E241,E201 - ], [ # srm 1/12 - [-1., 2., -2., 2., -1.], # noqa: E241,E201 - [2., -6., 8., -6., 2.], # noqa: E241,E201 - [-2., 8., -12., 8., -2.], # noqa: E241,E201 - [2., -6., 8., -6., 2.], # noqa: E241,E201 - [-1., 2., -2., 2., -1.], # noqa: E241,E201 - ] - ])).float() - srm_kernel[0] /= 2 - srm_kernel[1] /= 4 - srm_kernel[2] /= 12 - return srm_kernel.view(3, 1, 5, 5).repeat(1, input_channels, 1, 1) - - -def setup_srm_layer(input_channels: int = 3) -> torch.nn.Module: - """Creates a SRM convolution layer for noise analysis.""" - weights = setup_srm_weights(input_channels) - conv = torch.nn.Conv2d(input_channels, out_channels=3, kernel_size=5, stride=1, padding=2, bias=False) - with torch.no_grad(): - conv.weight = torch.nn.Parameter(weights, requires_grad=False) - return conv - - -class DeepFakeClassifierSRM(nn.Module): - def __init__(self, encoder, dropout_rate=0.5) -> None: - super().__init__() - self.encoder = encoder_params[encoder]["init_op"]() - self.avg_pool = AdaptiveAvgPool2d((1, 1)) - self.srm_conv = setup_srm_layer(3) - self.dropout = Dropout(dropout_rate) - self.fc = Linear(encoder_params[encoder]["features"], 1) - - def forward(self, x): - noise = self.srm_conv(x) - x = self.encoder.forward_features(noise) - x = self.avg_pool(x).flatten(1) - x = self.dropout(x) - x = self.fc(x) - return x - - -class GlobalWeightedAvgPool2d(nn.Module): - """ - Global Weighted Average Pooling from paper "Global Weighted Average - Pooling Bridges Pixel-level Localization and Image-level Classification" - """ - - def __init__(self, features: int, flatten=False): - super().__init__() - self.conv = nn.Conv2d(features, 1, kernel_size=1, bias=True) - self.flatten = 
flatten - - def fscore(self, x): - m = self.conv(x) - m = m.sigmoid().exp() - return m - - def norm(self, x: torch.Tensor): - return x / x.sum(dim=[2, 3], keepdim=True) - - def forward(self, x): - input_x = x - x = self.fscore(x) - x = self.norm(x) - x = x * input_x - x = x.sum(dim=[2, 3], keepdim=not self.flatten) - return x - - -class DeepFakeClassifier(nn.Module): - def __init__(self, encoder, dropout_rate=0.0) -> None: - super().__init__() - self.encoder = encoder_params[encoder]["init_op"]() - self.avg_pool = AdaptiveAvgPool2d((1, 1)) - self.dropout = Dropout(dropout_rate) - self.fc = Linear(encoder_params[encoder]["features"], 1) - - def forward(self, x): - x = self.encoder.forward_features(x) - x = self.avg_pool(x).flatten(1) - x = self.dropout(x) - x = self.fc(x) - return x - - - - -class DeepFakeClassifierGWAP(nn.Module): - def __init__(self, encoder, dropout_rate=0.5) -> None: - super().__init__() - self.encoder = encoder_params[encoder]["init_op"]() - self.avg_pool = GlobalWeightedAvgPool2d(encoder_params[encoder]["features"]) - self.dropout = Dropout(dropout_rate) - self.fc = Linear(encoder_params[encoder]["features"], 1) - - def forward(self, x): - x = self.encoder.forward_features(x) - x = self.avg_pool(x).flatten(1) - x = self.dropout(x) - x = self.fc(x) - return x \ No newline at end of file diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/data/hardcoded_questions.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/data/hardcoded_questions.py deleted file mode 100644 index 7abaff176932b6d914781ff602ba782aa765cb17..0000000000000000000000000000000000000000 --- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/data/hardcoded_questions.py +++ /dev/null @@ -1,162 +0,0 @@ -import json - - -def identity_questions(): - """ " - Adopted from https://github.com/young-geng/koala_data_pipeline/blob/main/process_hard_coded_data.py - """ - content = [] - - name = "Vicuna" - org = "Large Model Systems Organization (LMSYS)" - - def generate_conversations(questions, 
answers): - for q in questions: - for a in answers: - content.append( - { - "id": f"identity_{len(content)}", - "conversations": [ - {"from": "human", "value": q}, - {"from": "gpt", "value": a}, - ], - } - ) - - questions = [ - "Who are you?", - "What is your name?", - "Can you introduce yourself?", - "What's your name?", - "What are you called?", - "What are you?", - "Tell me your name.", - "Tell me about yourself.", - "Tell me about you.", - "Tell me who you are.", - ] - answers = [ - f"I am {name}, a language model trained by researchers from {org}.", - f"My name is {name}, and I'm a language model developed by {org}.", - f"You can call me {name}, and I was trained by {org} researchers as a language model.", - f"As a language model, I go by the name {name} and was trained by researchers from {org}.", - f"I'm a language model called {name}, and I was trained by {org} researchers.", - ] - generate_conversations(questions, answers) - - questions = [ - "Who created you?", - "Who made you?", - "Who built you?", - "Who programmed you?", - "Who trained you?", - "Who taught you?", - "Who developed you?", - ] - answers = [ - f"Researchers from {org} created me.", - f"I'm created by {org}.", - f"I'm built by researchers from {org}.", - f"I am a language model trained by researchers from {org}.", - f"I'm a language model developed by {org}.", - f"I'm a language model created by researchers from {org}.", - f"My creators are researchers from {org}.", - ] - generate_conversations(questions, answers) - - questions = [ - "Are you ChatGPT?", - "Are you GPT-2?", - "Are you GPT-3?", - "Are you GPT-4?", - "Are you davinci?", - "Are you davinci-001?", - "Are you davinci-002?", - "Are you davinci-003?", - "Are you curie?", - "Are you based on ChatGPT?", - "Are you based on GPT-2?", - "Are you based on GPT-3?", - "Are you based on GPT-4?", - "Are you based on davinci?", - "Are you based on davinci-001?", - "Are you based on davinci-002?", - "Are you based on davinci-003?", - "Are you 
based on curie?", - "Are you trained by OpenAI?", - "Are you trained by Google?", - "Are you trained by Microsoft?", - "Are you trained by Meta?", - "Are you trained by IBM?", - "Do you call OpenAI APIs?", - "Do you call Google APIs?", - "Do you call Microsoft APIs?", - "Do you call Meta APIs?", - "Do you call IBM APIs?", - "Are you created by OpenAI?", - "Are you created by Google?", - "Are you created by Microsoft?", - "Are you created by Meta?", - "Are you created by IBM?", - "Are you developed by OpenAI?", - "Are you developed by Google?", - "Are you developed by Microsoft?", - "Are you developed by Meta?", - "Are you developed by IBM?", - "Are you trained on OpenAI data?", - "Are you trained on Google data?", - "Are you trained on Microsoft data?", - "Are you trained on Meta data?", - "Are you trained on IBM data?", - "Are you trained with OpenAI data?", - "Are you trained with Google data?", - "Are you trained with Microsoft data?", - "Are you trained with Meta data?", - "Are you trained with IBM data?", - "Have you been trained with OpenAI data?", - "Have you been trained with Google data?", - "Have you been trained with Microsoft data?", - "Have you been trained with Meta data?", - "Have you been trained with IBM data?", - "Are you finetuned on OpenAI data?", - "Are you finetuned on Google data?", - "Are you finetuned on Microsoft data?", - "Are you finetuned on Meta data?", - "Are you finetuned on IBM data?", - "Are you finetuned with OpenAI data?", - "Are you finetuned with Google data?", - "Are you finetuned with Microsoft data?", - "Are you finetuned with Meta data?", - "Are you finetuned with IBM data?", - "Have you been finetuned with OpenAI data?", - "Have you been finetuned with Google data?", - "Have you been finetuned with Microsoft data?", - "Have you been finetuned with Meta data?", - "Have you been finetuned with IBM data?", - ] - answers = [ - f"No, I am a language model trained by researchers from {org}.", - f"No, I am a language model 
developed by researchers from {org}.", - f"No, I am a language model created by researchers from {org}.", - f"No, I am trained by researchers from {org}.", - f"No, I am developed by researchers from {org}.", - f"No, I am created by researchers from {org}.", - f"No, I'm a language model trained by researchers from {org}.", - f"No, I'm a language model developed by researchers from {org}.", - f"No, I'm a language model created by researchers from {org}.", - f"No, I'm trained by researchers from {org}.", - f"No, I'm developed by researchers from {org}.", - f"No, I'm created by researchers from {org}.", - ] - generate_conversations(questions, answers) - - return content - - -if __name__ == "__main__": - out_file = "hardcoded.json" - - content = [] - content.extend(identity_questions()) - - json.dump(content, open(out_file, "w"), indent=2) diff --git a/spaces/Jackflack09/diffuse-custom/Waifu2x/utils/cls.py b/spaces/Jackflack09/diffuse-custom/Waifu2x/utils/cls.py deleted file mode 100644 index c153c42455dc4d6b9c4a532edaac0aa8f4dcca1d..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/Waifu2x/utils/cls.py +++ /dev/null @@ -1,157 +0,0 @@ -# This code is copied from https://github.com/thomasjpfan/pytorch/blob/401ec389db2c9d2978917a6e4d1101b20340d7e7/torch/optim/lr_scheduler.py - - -# This code is under review at PyTorch and is to be merged eventually to make CLR available to all. -# Tested with pytorch 0.2.0 - -import numpy as np - - -class CyclicLR(object): - """Sets the learning rate of each parameter group according to - cyclical learning rate policy (CLR). The policy cycles the learning - rate between two boundaries with a constant frequency, as detailed in - the paper `Cyclical Learning Rates for Training Neural Networks`_. - The distance between the two boundaries can be scaled on a per-iteration - or per-cycle basis. - Cyclical learning rate policy changes the learning rate after every batch. 
- `batch_step` should be called after a batch has been used for training. - To resume training, save `last_batch_iteration` and use it to instantiate `CyclicLR`. - This class has three built-in policies, as put forth in the paper: - "triangular": - A basic triangular cycle w/ no amplitude scaling. - "triangular2": - A basic triangular cycle that scales initial amplitude by half each cycle. - "exp_range": - A cycle that scales initial amplitude by gamma**(cycle iterations) at each - cycle iteration. - This implementation was adapted from the github repo: `bckenstler/CLR`_ - Args: - optimizer (Optimizer): Wrapped optimizer. - base_lr (float or list): Initial learning rate which is the - lower boundary in the cycle for each param group. - Default: 0.001 - max_lr (float or list): Upper boundaries in the cycle for - each parameter group. Functionally, - it defines the cycle amplitude (max_lr - base_lr). - The lr at any cycle is the sum of base_lr - and some scaling of the amplitude; therefore - max_lr may not actually be reached depending on - scaling function. Default: 0.006 - step_size (int): Number of training iterations per - half cycle. Authors suggest setting step_size - 2-8 x training iterations in epoch. Default: 2000 - mode (str): One of {triangular, triangular2, exp_range}. - Values correspond to policies detailed above. - If scale_fn is not None, this argument is ignored. - Default: 'triangular' - gamma (float): Constant in 'exp_range' scaling function: - gamma**(cycle iterations) - Default: 1.0 - scale_fn (function): Custom scaling policy defined by a single - argument lambda function, where - 0 <= scale_fn(x) <= 1 for all x >= 0. - mode parameter is ignored - Default: None - scale_mode (str): {'cycle', 'iterations'}. - Defines whether scale_fn is evaluated on - cycle number or cycle iterations (training - iterations since start of cycle). - Default: 'cycle' - last_batch_iteration (int): The index of the last batch.
Default: -1 - Example: - >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) - >>> scheduler = torch.optim.CyclicLR(optimizer) - >>> data_loader = torch.utils.data.DataLoader(...) - >>> for epoch in range(10): - >>> for batch in data_loader: - >>> scheduler.batch_step() - >>> train_batch(...) - .. _Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186 - .. _bckenstler/CLR: https://github.com/bckenstler/CLR - """ - - def __init__(self, optimizer, base_lr=1e-3, max_lr=6e-3, - step_size=2000, mode='triangular', gamma=1., - scale_fn=None, scale_mode='cycle', last_batch_iteration=-1): - - # if not isinstance(optimizer, Optimizer): - # raise TypeError('{} is not an Optimizer'.format( - # type(optimizer).__name__)) - self.optimizer = optimizer - - if isinstance(base_lr, list) or isinstance(base_lr, tuple): - if len(base_lr) != len(optimizer.param_groups): - raise ValueError("expected {} base_lr, got {}".format( - len(optimizer.param_groups), len(base_lr))) - self.base_lrs = list(base_lr) - else: - self.base_lrs = [base_lr] * len(optimizer.param_groups) - - if isinstance(max_lr, list) or isinstance(max_lr, tuple): - if len(max_lr) != len(optimizer.param_groups): - raise ValueError("expected {} max_lr, got {}".format( - len(optimizer.param_groups), len(max_lr))) - self.max_lrs = list(max_lr) - else: - self.max_lrs = [max_lr] * len(optimizer.param_groups) - - self.step_size = step_size - - if mode not in ['triangular', 'triangular2', 'exp_range'] \ - and scale_fn is None: - raise ValueError('mode is invalid and scale_fn is None') - - self.mode = mode - self.gamma = gamma - self.current_lr = None - - if scale_fn is None: - if self.mode == 'triangular': - self.scale_fn = self._triangular_scale_fn - self.scale_mode = 'cycle' - elif self.mode == 'triangular2': - self.scale_fn = self._triangular2_scale_fn - self.scale_mode = 'cycle' - elif self.mode == 'exp_range': - self.scale_fn = self._exp_range_scale_fn - self.scale_mode 
= 'iterations' - else: - self.scale_fn = scale_fn - self.scale_mode = scale_mode - - self.batch_step(last_batch_iteration + 1) - self.last_batch_iteration = last_batch_iteration - - def batch_step(self, batch_iteration=None): - if batch_iteration is None: - batch_iteration = self.last_batch_iteration + 1 - self.last_batch_iteration = batch_iteration - for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()): - param_group['lr'] = lr - self.current_lr = lr - - def _triangular_scale_fn(self, x): - return 1. - - def _triangular2_scale_fn(self, x): - return 1 / (2. ** (x - 1)) - - def _exp_range_scale_fn(self, x): - return self.gamma ** (x) - - def get_lr(self): - step_size = float(self.step_size) - cycle = np.floor(1 + self.last_batch_iteration / (2 * step_size)) - x = np.abs(self.last_batch_iteration / step_size - 2 * cycle + 1) - - lrs = [] - param_lrs = zip(self.optimizer.param_groups, self.base_lrs, self.max_lrs) - for param_group, base_lr, max_lr in param_lrs: - base_height = (max_lr - base_lr) * np.maximum(0, (1 - x)) - if self.scale_mode == 'cycle': - lr = base_lr + base_height * self.scale_fn(cycle) - else: - lr = base_lr + base_height * self.scale_fn(self.last_batch_iteration) - lrs.append(lr) - return lrs diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/README.md b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/README.md deleted file mode 100644 index 863c931605571fb06e41fbbc4c443591aa1ec4cf..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChuanhuChatGPT -emoji: 🐯🌡️ -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.41.2 -app_file: ChuanhuChatbot.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Jonni/04-Gradio_SOTA/qasrl_model_pipeline.py b/spaces/Jonni/04-Gradio_SOTA/qasrl_model_pipeline.py deleted 
file mode 100644 index 50135f76849bc8537fcae83b72532da661487da6..0000000000000000000000000000000000000000 --- a/spaces/Jonni/04-Gradio_SOTA/qasrl_model_pipeline.py +++ /dev/null @@ -1,183 +0,0 @@ -from typing import Optional -import json -from argparse import Namespace -from pathlib import Path -from transformers import Text2TextGenerationPipeline, AutoModelForSeq2SeqLM, AutoTokenizer - -def get_markers_for_model(is_t5_model: bool) -> Namespace: - special_tokens_constants = Namespace() - if is_t5_model: - # T5 model have 100 special tokens by default - special_tokens_constants.separator_input_question_predicate = "" - special_tokens_constants.separator_output_answers = "" - special_tokens_constants.separator_output_questions = "" # if using only questions - special_tokens_constants.separator_output_question_answer = "" - special_tokens_constants.separator_output_pairs = "" - special_tokens_constants.predicate_generic_marker = "" - special_tokens_constants.predicate_verb_marker = "" - special_tokens_constants.predicate_nominalization_marker = "" - - else: - special_tokens_constants.separator_input_question_predicate = "" - special_tokens_constants.separator_output_answers = "" - special_tokens_constants.separator_output_questions = "" # if using only questions - special_tokens_constants.separator_output_question_answer = "" - special_tokens_constants.separator_output_pairs = "" - special_tokens_constants.predicate_generic_marker = "" - special_tokens_constants.predicate_verb_marker = "" - special_tokens_constants.predicate_nominalization_marker = "" - return special_tokens_constants - -def load_trained_model(name_or_path): - import huggingface_hub as HFhub - tokenizer = AutoTokenizer.from_pretrained(name_or_path) - model = AutoModelForSeq2SeqLM.from_pretrained(name_or_path) - # load preprocessing_kwargs from the model repo on HF hub, or from the local model directory - kwargs_filename = None - if name_or_path.startswith("kleinay/"): # and 'preprocessing_kwargs.json' 
in HFhub.list_repo_files(name_or_path): # the supported version of HFhub doesn't support list_repo_files - kwargs_filename = HFhub.hf_hub_download(repo_id=name_or_path, filename="preprocessing_kwargs.json") - elif Path(name_or_path).is_dir() and (Path(name_or_path) / "experiment_kwargs.json").exists(): - kwargs_filename = Path(name_or_path) / "experiment_kwargs.json" - - if kwargs_filename: - preprocessing_kwargs = json.load(open(kwargs_filename)) - # integrate into model.config (for decoding args, e.g. "num_beams"), and save also as standalone object for preprocessing - model.config.preprocessing_kwargs = Namespace(**preprocessing_kwargs) - model.config.update(preprocessing_kwargs) - return model, tokenizer - - -class QASRL_Pipeline(Text2TextGenerationPipeline): - def __init__(self, model_repo: str, **kwargs): - model, tokenizer = load_trained_model(model_repo) - super().__init__(model, tokenizer, framework="pt") - self.is_t5_model = "t5" in model.config.model_type - self.special_tokens = get_markers_for_model(self.is_t5_model) - self.data_args = model.config.preprocessing_kwargs - # backward compatibility - default keyword values implemented in `run_summarization`, thus not saved in `preprocessing_kwargs` - if "predicate_marker_type" not in vars(self.data_args): - self.data_args.predicate_marker_type = "generic" - if "use_bilateral_predicate_marker" not in vars(self.data_args): - self.data_args.use_bilateral_predicate_marker = True - if "append_verb_form" not in vars(self.data_args): - self.data_args.append_verb_form = True - self._update_config(**kwargs) - - def _update_config(self, **kwargs): - " Update self.model.config with initialization parameters and necessary defaults.
" - # set default values that will always override model.config, but can overriden by __init__ kwargs - kwargs["max_length"] = kwargs.get("max_length", 80) - # override model.config with kwargs - for k,v in kwargs.items(): - self.model.config.__dict__[k] = v - - def _sanitize_parameters(self, **kwargs): - preprocess_kwargs, forward_kwargs, postprocess_kwargs = {}, {}, {} - if "predicate_marker" in kwargs: - preprocess_kwargs["predicate_marker"] = kwargs["predicate_marker"] - if "predicate_type" in kwargs: - preprocess_kwargs["predicate_type"] = kwargs["predicate_type"] - if "verb_form" in kwargs: - preprocess_kwargs["verb_form"] = kwargs["verb_form"] - return preprocess_kwargs, forward_kwargs, postprocess_kwargs - - def preprocess(self, inputs, predicate_marker="", predicate_type=None, verb_form=None): - # Here, inputs is string or list of strings; apply string postprocessing - if isinstance(inputs, str): - processed_inputs = self._preprocess_string(inputs, predicate_marker, predicate_type, verb_form) - elif hasattr(inputs, "__iter__"): - processed_inputs = [self._preprocess_string(s, predicate_marker, predicate_type, verb_form) for s in inputs] - else: - raise ValueError("inputs must be str or Iterable[str]") - # Now pass to super.preprocess for tokenization - return super().preprocess(processed_inputs) - - def _preprocess_string(self, seq: str, predicate_marker: str, predicate_type: Optional[str], verb_form: Optional[str]) -> str: - sent_tokens = seq.split(" ") - assert predicate_marker in sent_tokens, f"Input sentence must include a predicate-marker token ('{predicate_marker}') before the target predicate word" - predicate_idx = sent_tokens.index(predicate_marker) - sent_tokens.remove(predicate_marker) - sentence_before_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx)]) - predicate = sent_tokens[predicate_idx] - sentence_after_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx+1, len(sent_tokens))]) - - if 
self.data_args.predicate_marker_type == "generic": - predicate_marker = self.special_tokens.predicate_generic_marker - # In case we want special marker for each predicate type: - elif self.data_args.predicate_marker_type == "pred_type": - assert predicate_type is not None, "For this model, you must provide the `predicate_type` either when initializing QASRL_Pipeline(...) or when applying __call__(...) on it" - assert predicate_type in ("verbal", "nominal"), f"`predicate_type` must be either 'verbal' or 'nominal'; got '{predicate_type}'" - predicate_marker = {"verbal": self.special_tokens.predicate_verb_marker, - "nominal": self.special_tokens.predicate_nominalization_marker - }[predicate_type] - - if self.data_args.use_bilateral_predicate_marker: - seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {predicate_marker} {sentence_after_predicate}" - else: - seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {sentence_after_predicate}" - - # embed also verb_form - if self.data_args.append_verb_form and verb_form is None: - raise ValueError(f"For this model, you must provide the `verb_form` of the predicate when applying __call__(...)") - elif self.data_args.append_verb_form: - seq = f"{seq} {self.special_tokens.separator_input_question_predicate} {verb_form} " - else: - seq = f"{seq} " - - # append source prefix (for t5 models) - prefix = self._get_source_prefix(predicate_type) - - return prefix + seq - - def _get_source_prefix(self, predicate_type: Optional[str]): - if not self.is_t5_model or self.data_args.source_prefix is None: - return '' - if not self.data_args.source_prefix.startswith("<"): # Regular prefix - not dependent on input row x - return self.data_args.source_prefix - if self.data_args.source_prefix == "": - if predicate_type is None: - raise ValueError("source_prefix is '' but no `predicate_type` was given.") - else: - return f"Generate QAs for {predicate_type} QASRL: " - - def _forward(self, *args, **kwargs): - outputs
= super()._forward(*args, **kwargs) - return outputs - - - def postprocess(self, model_outputs): - output_seq = self.tokenizer.decode( - model_outputs["output_ids"].squeeze(), - skip_special_tokens=False, - clean_up_tokenization_spaces=False, - ) - output_seq = output_seq.strip(self.tokenizer.pad_token).strip(self.tokenizer.eos_token).strip() - qa_subseqs = output_seq.split(self.special_tokens.separator_output_pairs) - qas = [self._postprocess_qa(qa_subseq) for qa_subseq in qa_subseqs] - return {"generated_text": output_seq, - "QAs": qas} - - def _postprocess_qa(self, seq: str) -> Optional[dict]: - # split question and answers - if self.special_tokens.separator_output_question_answer in seq: - question, answer = seq.split(self.special_tokens.separator_output_question_answer)[:2] - else: - print("invalid format: no separator between question and answer found...") - return None - # question, answer = seq, '' # Or: backoff to only question - # skip "_" slots in questions - question = ' '.join(t for t in question.split(' ') if t != '_') - answers = [a.strip() for a in answer.split(self.special_tokens.separator_output_answers)] - return {"question": question, "answers": answers} - - -if __name__ == "__main__": - pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-baseline") - res1 = pipe("The student was interested in Luke 's research about sea animals .", verb_form="research", predicate_type="nominal") - res2 = pipe(["The doctor was interested in Luke 's treatment .", - "The Veterinary student was interested in Luke 's treatment of sea animals ."], verb_form="treat", predicate_type="nominal", num_beams=10) - res3 = pipe("A number of professions have developed that specialize in the treatment of mental disorders .", verb_form="develop", predicate_type="verbal") - print(res1) - print(res2) - print(res3) - \ No newline at end of file diff --git a/spaces/Kangarroar/ApplioRVC-Inference/tools/calc_rvc_model_similarity.py
b/spaces/Kangarroar/ApplioRVC-Inference/tools/calc_rvc_model_similarity.py deleted file mode 100644 index 42496e088e51dc5162d0714470c2226f696e260c..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/tools/calc_rvc_model_similarity.py +++ /dev/null @@ -1,96 +0,0 @@ -# This code references https://huggingface.co/JosephusCheung/ASimilarityCalculatior/blob/main/qwerty.py -# Fill in the path of the model to be queried and the root directory of the reference models, and this script will return the similarity between the model to be queried and all reference models. -import os -import logging - -logger = logging.getLogger(__name__) - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def cal_cross_attn(to_q, to_k, to_v, rand_input): - hidden_dim, embed_dim = to_q.shape - attn_to_q = nn.Linear(hidden_dim, embed_dim, bias=False) - attn_to_k = nn.Linear(hidden_dim, embed_dim, bias=False) - attn_to_v = nn.Linear(hidden_dim, embed_dim, bias=False) - attn_to_q.load_state_dict({"weight": to_q}) - attn_to_k.load_state_dict({"weight": to_k}) - attn_to_v.load_state_dict({"weight": to_v}) - - return torch.einsum( - "ik, jk -> ik", - F.softmax( - torch.einsum("ij, kj -> ik", attn_to_q(rand_input), attn_to_k(rand_input)), - dim=-1, - ), - attn_to_v(rand_input), - ) - - -def model_hash(filename): - try: - with open(filename, "rb") as file: - import hashlib - - m = hashlib.sha256() - - file.seek(0x100000) - m.update(file.read(0x10000)) - return m.hexdigest()[0:8] - except FileNotFoundError: - return "NOFILE" - - -def eval(model, n, input): - qk = f"enc_p.encoder.attn_layers.{n}.conv_q.weight" - uk = f"enc_p.encoder.attn_layers.{n}.conv_k.weight" - vk = f"enc_p.encoder.attn_layers.{n}.conv_v.weight" - atoq, atok, atov = model[qk][:, :, 0], model[uk][:, :, 0], model[vk][:, :, 0] - - attn = cal_cross_attn(atoq, atok, atov, input) - return attn - - -def main(path, root): - torch.manual_seed(114514) - model_a = torch.load(path, 
map_location="cpu")["weight"] - - logger.info("Query:\t\t%s\t%s" % (path, model_hash(path))) - - map_attn_a = {} - map_rand_input = {} - for n in range(6): - hidden_dim, embed_dim, _ = model_a[ - f"enc_p.encoder.attn_layers.{n}.conv_v.weight" - ].shape - rand_input = torch.randn([embed_dim, hidden_dim]) - - map_attn_a[n] = eval(model_a, n, rand_input) - map_rand_input[n] = rand_input - - del model_a - - for name in sorted(list(os.listdir(root))): - path = "%s/%s" % (root, name) - model_b = torch.load(path, map_location="cpu")["weight"] - - sims = [] - for n in range(6): - attn_a = map_attn_a[n] - attn_b = eval(model_b, n, map_rand_input[n]) - - sim = torch.mean(torch.cosine_similarity(attn_a, attn_b)) - sims.append(sim) - - logger.info( - "Reference:\t%s\t%s\t%s" - % (path, model_hash(path), f"{torch.mean(torch.stack(sims)) * 1e2:.2f}%") - ) - - -if __name__ == "__main__": - query_path = r"assets\weights\mi v3.pth" - reference_root = r"assets\weights" - main(query_path, reference_root) diff --git a/spaces/KenjieDec/RemBG/rembg/sessions/u2netp.py b/spaces/KenjieDec/RemBG/rembg/sessions/u2netp.py deleted file mode 100644 index b28fb6adaf472268ff3f713cc11f6e75c0a39e69..0000000000000000000000000000000000000000 --- a/spaces/KenjieDec/RemBG/rembg/sessions/u2netp.py +++ /dev/null @@ -1,51 +0,0 @@ -import os -from typing import List - -import numpy as np -import pooch -from PIL import Image -from PIL.Image import Image as PILImage - -from .base import BaseSession - - -class U2netpSession(BaseSession): - def predict(self, img: PILImage, *args, **kwargs) -> List[PILImage]: - ort_outs = self.inner_session.run( - None, - self.normalize( - img, (0.485, 0.456, 0.406), (0.229, 0.224, 0.225), (320, 320) - ), - ) - - pred = ort_outs[0][:, 0, :, :] - - ma = np.max(pred) - mi = np.min(pred) - - pred = (pred - mi) / (ma - mi) - pred = np.squeeze(pred) - - mask = Image.fromarray((pred * 255).astype("uint8"), mode="L") - mask = mask.resize(img.size, Image.LANCZOS) - - return [mask] - - 
@classmethod - def download_models(cls, *args, **kwargs): - fname = f"{cls.name()}.onnx" - pooch.retrieve( - "https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2netp.onnx", - None - if cls.checksum_disabled(*args, **kwargs) - else "md5:8e83ca70e441ab06c318d82300c84806", - fname=fname, - path=cls.u2net_home(*args, **kwargs), - progressbar=True, - ) - - return os.path.join(cls.u2net_home(), fname) - - @classmethod - def name(cls, *args, **kwargs): - return "u2netp" diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/train_linglf02mel_seq2seq_oneshotvc.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/train_linglf02mel_seq2seq_oneshotvc.py deleted file mode 100644 index daf1c6a00d7fe9d0e7ef319b980f92a07bbd6774..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/train_linglf02mel_seq2seq_oneshotvc.py +++ /dev/null @@ -1,288 +0,0 @@ -import os, sys -# sys.path.append('/home/shaunxliu/projects/nnsp') -import matplotlib -matplotlib.use("Agg") -import matplotlib.pyplot as plt -from matplotlib.ticker import MaxNLocator -import torch -from torch.utils.data import DataLoader -import numpy as np -from .solver import BaseSolver -from utils.data_load import OneshotVcDataset, MultiSpkVcCollate -# from src.rnn_ppg2mel import BiRnnPpg2MelModel -# from src.mel_decoder_mol_encAddlf0 import MelDecoderMOL -from .loss import MaskedMSELoss -from .optim import Optimizer -from utils.util import human_format -from ppg2mel import MelDecoderMOLv2 - - -class Solver(BaseSolver): - """Customized Solver.""" - def __init__(self, config, paras, mode): - super().__init__(config, paras, mode) - self.num_att_plots = 5 - self.att_ws_dir = f"{self.logdir}/att_ws" - os.makedirs(self.att_ws_dir, exist_ok=True) - self.best_loss = np.inf - - def fetch_data(self, data): - """Move data to device""" - data = [i.to(self.device) for i in data] - return data - - def load_data(self): - 
""" Load data for training/validation/plotting.""" - train_dataset = OneshotVcDataset( - meta_file=self.config.data.train_fid_list, - vctk_ppg_dir=self.config.data.vctk_ppg_dir, - libri_ppg_dir=self.config.data.libri_ppg_dir, - vctk_f0_dir=self.config.data.vctk_f0_dir, - libri_f0_dir=self.config.data.libri_f0_dir, - vctk_wav_dir=self.config.data.vctk_wav_dir, - libri_wav_dir=self.config.data.libri_wav_dir, - vctk_spk_dvec_dir=self.config.data.vctk_spk_dvec_dir, - libri_spk_dvec_dir=self.config.data.libri_spk_dvec_dir, - ppg_file_ext=self.config.data.ppg_file_ext, - min_max_norm_mel=self.config.data.min_max_norm_mel, - mel_min=self.config.data.mel_min, - mel_max=self.config.data.mel_max, - ) - dev_dataset = OneshotVcDataset( - meta_file=self.config.data.dev_fid_list, - vctk_ppg_dir=self.config.data.vctk_ppg_dir, - libri_ppg_dir=self.config.data.libri_ppg_dir, - vctk_f0_dir=self.config.data.vctk_f0_dir, - libri_f0_dir=self.config.data.libri_f0_dir, - vctk_wav_dir=self.config.data.vctk_wav_dir, - libri_wav_dir=self.config.data.libri_wav_dir, - vctk_spk_dvec_dir=self.config.data.vctk_spk_dvec_dir, - libri_spk_dvec_dir=self.config.data.libri_spk_dvec_dir, - ppg_file_ext=self.config.data.ppg_file_ext, - min_max_norm_mel=self.config.data.min_max_norm_mel, - mel_min=self.config.data.mel_min, - mel_max=self.config.data.mel_max, - ) - self.train_dataloader = DataLoader( - train_dataset, - num_workers=self.paras.njobs, - shuffle=True, - batch_size=self.config.hparas.batch_size, - pin_memory=False, - drop_last=True, - collate_fn=MultiSpkVcCollate(self.config.model.frames_per_step, - use_spk_dvec=True), - ) - self.dev_dataloader = DataLoader( - dev_dataset, - num_workers=self.paras.njobs, - shuffle=False, - batch_size=self.config.hparas.batch_size, - pin_memory=False, - drop_last=False, - collate_fn=MultiSpkVcCollate(self.config.model.frames_per_step, - use_spk_dvec=True), - ) - self.plot_dataloader = DataLoader( - dev_dataset, - num_workers=self.paras.njobs, - shuffle=False, - 
batch_size=1, - pin_memory=False, - drop_last=False, - collate_fn=MultiSpkVcCollate(self.config.model.frames_per_step, - use_spk_dvec=True, - give_uttids=True), - ) - msg = "Have prepared training set and dev set." - self.verbose(msg) - - def load_pretrained_params(self): - print("Load pretrained model from: ", self.config.data.pretrain_model_file) - ignore_layer_prefixes = ["speaker_embedding_table"] - pretrain_model_file = self.config.data.pretrain_model_file - pretrain_ckpt = torch.load( - pretrain_model_file, map_location=self.device - )["model"] - model_dict = self.model.state_dict() - print(self.model) - - # 1. filter out unnecessary keys - for prefix in ignore_layer_prefixes: - pretrain_ckpt = {k: v - for k, v in pretrain_ckpt.items() if not k.startswith(prefix) - } - # 2. overwrite entries in the existing state dict - model_dict.update(pretrain_ckpt) - - # 3. load the new state dict - self.model.load_state_dict(model_dict) - - def set_model(self): - """Setup model and optimizer""" - # Model - print("[INFO] Model name: ", self.config["model_name"]) - self.model = MelDecoderMOLv2( - **self.config["model"] - ).to(self.device) - # self.load_pretrained_params() - - # model_params = [{'params': self.model.spk_embedding.weight}] - model_params = [{'params': self.model.parameters()}] - - # Loss criterion - self.loss_criterion = MaskedMSELoss(self.config.model.frames_per_step) - - # Optimizer - self.optimizer = Optimizer(model_params, **self.config["hparas"]) - self.verbose(self.optimizer.create_msg()) - - # Automatically load pre-trained model if self.paras.load is given - self.load_ckpt() - - def exec(self): - self.verbose("Total training steps {}.".format( - human_format(self.max_step))) - - mel_loss = None - n_epochs = 0 - # Set as current time - self.timer.set() - - while self.step < self.max_step: - for data in self.train_dataloader: - # Pre-step: update lr_rate and do zero_grad - lr_rate = self.optimizer.pre_step(self.step) - total_loss = 0 - # data to
device - ppgs, lf0_uvs, mels, in_lengths, \ - out_lengths, spk_ids, stop_tokens = self.fetch_data(data) - self.timer.cnt("rd") - mel_outputs, mel_outputs_postnet, predicted_stop = self.model( - ppgs, - in_lengths, - mels, - out_lengths, - lf0_uvs, - spk_ids - ) - mel_loss, stop_loss = self.loss_criterion( - mel_outputs, - mel_outputs_postnet, - mels, - out_lengths, - stop_tokens, - predicted_stop - ) - loss = mel_loss + stop_loss - - self.timer.cnt("fw") - - # Back-prop - grad_norm = self.backward(loss) - self.step += 1 - - # Logger - if (self.step == 1) or (self.step % self.PROGRESS_STEP == 0): - self.progress("Tr|loss:{:.4f},mel-loss:{:.4f},stop-loss:{:.4f}|Grad.Norm-{:.2f}|{}" - .format(loss.cpu().item(), mel_loss.cpu().item(), - stop_loss.cpu().item(), grad_norm, self.timer.show())) - self.write_log('loss', {'tr/loss': loss, - 'tr/mel-loss': mel_loss, - 'tr/stop-loss': stop_loss}) - - # Validation - if (self.step == 1) or (self.step % self.valid_step == 0): - self.validate() - - # End of step - # https://github.com/pytorch/pytorch/issues/13246#issuecomment-529185354 - torch.cuda.empty_cache() - self.timer.set() - if self.step > self.max_step: - break - n_epochs += 1 - self.log.close() - - def validate(self): - self.model.eval() - dev_loss, dev_mel_loss, dev_stop_loss = 0.0, 0.0, 0.0 - - for i, data in enumerate(self.dev_dataloader): - self.progress('Valid step - {}/{}'.format(i+1, len(self.dev_dataloader))) - # Fetch data - ppgs, lf0_uvs, mels, in_lengths, \ - out_lengths, spk_ids, stop_tokens = self.fetch_data(data) - with torch.no_grad(): - mel_outputs, mel_outputs_postnet, predicted_stop = self.model( - ppgs, - in_lengths, - mels, - out_lengths, - lf0_uvs, - spk_ids - ) - mel_loss, stop_loss = self.loss_criterion( - mel_outputs, - mel_outputs_postnet, - mels, - out_lengths, - stop_tokens, - predicted_stop - ) - loss = mel_loss + stop_loss - - dev_loss += loss.cpu().item() - dev_mel_loss += mel_loss.cpu().item() - dev_stop_loss += stop_loss.cpu().item() - - 
dev_loss = dev_loss / (i + 1) - dev_mel_loss = dev_mel_loss / (i + 1) - dev_stop_loss = dev_stop_loss / (i + 1) - self.save_checkpoint(f'step_{self.step}.pth', 'loss', dev_loss, show_msg=False) - if dev_loss < self.best_loss: - self.best_loss = dev_loss - self.save_checkpoint(f'best_loss_step_{self.step}.pth', 'loss', dev_loss) - self.write_log('loss', {'dv/loss': dev_loss, - 'dv/mel-loss': dev_mel_loss, - 'dv/stop-loss': dev_stop_loss}) - - # plot attention - for i, data in enumerate(self.plot_dataloader): - if i == self.num_att_plots: - break - # Fetch data - ppgs, lf0_uvs, mels, in_lengths, \ - out_lengths, spk_ids, stop_tokens = self.fetch_data(data[:-1]) - fid = data[-1][0] - with torch.no_grad(): - _, _, _, att_ws = self.model( - ppgs, - in_lengths, - mels, - out_lengths, - lf0_uvs, - spk_ids, - output_att_ws=True - ) - att_ws = att_ws.squeeze(0).cpu().numpy() - att_ws = att_ws[None] - w, h = plt.figaspect(1.0 / len(att_ws)) - fig = plt.Figure(figsize=(w * 1.3, h * 1.3)) - axes = fig.subplots(1, len(att_ws)) - if len(att_ws) == 1: - axes = [axes] - - for ax, aw in zip(axes, att_ws): - ax.imshow(aw.astype(np.float32), aspect="auto") - ax.set_title(f"{fid}") - ax.set_xlabel("Input") - ax.set_ylabel("Output") - ax.xaxis.set_major_locator(MaxNLocator(integer=True)) - ax.yaxis.set_major_locator(MaxNLocator(integer=True)) - fig_name = f"{self.att_ws_dir}/{fid}_step{self.step}.png" - fig.savefig(fig_name) - - # Resume training - self.model.train() - diff --git a/spaces/Kevin676/Raven-with-Voice-Cloning-2.0/README.md b/spaces/Kevin676/Raven-with-Voice-Cloning-2.0/README.md deleted file mode 100644 index 36b683023372ab38b211340f8d23e269c357d03b..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Raven-with-Voice-Cloning-2.0/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Raven with Voice Cloning-2.0 -emoji: ⚡ -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.25.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: 
Kevin676/Voice-Cloning --- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/synthesizer_preprocess_embeds.py deleted file mode 100644 index 94f864d5d3c36c6177b211f5818e7c920a41cd8c..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Real-Time-Voice-Cloning/synthesizer_preprocess_embeds.py +++ /dev/null @@ -1,25 +0,0 @@ -from synthesizer.preprocess import create_embeddings -from utils.argutils import print_args -from pathlib import Path -import argparse - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="Creates embeddings for the synthesizer from the LibriSpeech utterances.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument("synthesizer_root", type=Path, help=\ - "Path to the synthesizer training data that contains the audios and the train.txt file. " - "If you left everything at default, it should be /SV2TTS/synthesizer/.") - parser.add_argument("-e", "--encoder_model_fpath", type=Path, - default="encoder/saved_models/pretrained.pt", help=\ - "Path to your trained encoder model.") - parser.add_argument("-n", "--n_processes", type=int, default=4, help= \ - "Number of parallel processes. An encoder is created for each, so you may need to lower " - "this value on GPUs with low memory. Set it to 1 if CUDA is unhappy.") - args = parser.parse_args() - - # Preprocess the dataset - print_args(args, parser) - create_embeddings(**vars(args)) diff --git a/spaces/KevinQHLin/UniVTG/utils/temporal_nms.py deleted file mode 100644 index 2844f5d4c1ac71760cd82c7aaf82c6b2daa9a207..0000000000000000000000000000000000000000 --- a/spaces/KevinQHLin/UniVTG/utils/temporal_nms.py +++ /dev/null @@ -1,74 +0,0 @@ -""" -Non-Maximum Suppression for video proposals. 
-""" - - -def compute_temporal_iou(pred, gt): - """ deprecated due to performance concerns - compute intersection-over-union along temporal axis - Args: - pred: [st (float), ed (float)] - gt: [st (float), ed (float)] - Returns: - iou (float): - - Ref: https://github.com/LisaAnne/LocalizingMoments/blob/master/utils/eval.py - """ - intersection = max(0, min(pred[1], gt[1]) - max(pred[0], gt[0])) - union = max(pred[1], gt[1]) - min(pred[0], gt[0]) # not the correct union though - if union == 0: - return 0 - else: - return 1.0 * intersection / union - - -def temporal_nms(predictions, nms_thd, max_after_nms=100): - """ - Args: - predictions: list(sublist), each sublist is [st (float), ed(float), score (float)], - note larger scores are better and are preserved. For metrics that are better when smaller, - please convert to its negative, e.g., convert distance to negative distance. - nms_thd: float in [0, 1] - max_after_nms: - Returns: - predictions_after_nms: list(sublist), each sublist is [st (float), ed(float), score (float)] - References: - https://github.com/wzmsltw/BSN-boundary-sensitive-network/blob/7b101fc5978802aa3c95ba5779eb54151c6173c6/Post_processing.py#L42 - """ - if len(predictions) == 1: # only has one prediction, no need for nms - return predictions - - predictions = sorted(predictions, key=lambda x: x[2], reverse=True) # descending order - - tstart = [e[0] for e in predictions] - tend = [e[1] for e in predictions] - tscore = [e[2] for e in predictions] - rstart = [] - rend = [] - rscore = [] - while len(tstart) > 1 and len(rscore) < max_after_nms: # max 100 after nms - idx = 1 - while idx < len(tstart): # compare with every prediction in the list. - if compute_temporal_iou([tstart[0], tend[0]], [tstart[idx], tend[idx]]) > nms_thd: - # rm highly overlapped lower score entries. 
- tstart.pop(idx) - tend.pop(idx) - tscore.pop(idx) - # print("--------------------------------") - # print(compute_temporal_iou([tstart[0], tend[0]], [tstart[idx], tend[idx]])) - # print([tstart[0], tend[0]], [tstart[idx], tend[idx]]) - # print(tstart.pop(idx), tend.pop(idx), tscore.pop(idx)) - else: - # move to next - idx += 1 - rstart.append(tstart.pop(0)) - rend.append(tend.pop(0)) - rscore.append(tscore.pop(0)) - - if len(rscore) < max_after_nms and len(tstart) >= 1: # add the last, possibly empty. - rstart.append(tstart.pop(0)) - rend.append(tend.pop(0)) - rscore.append(tscore.pop(0)) - - predictions_after_nms = [[st, ed, s] for s, st, ed in zip(rscore, rstart, rend)] - return predictions_after_nms diff --git a/spaces/Kimata/multimodal-deepfakes/app.py b/spaces/Kimata/multimodal-deepfakes/app.py deleted file mode 100644 index a368a28bd6778155cc44c11302e5f13b7d13fcd1..0000000000000000000000000000000000000000 --- a/spaces/Kimata/multimodal-deepfakes/app.py +++ /dev/null @@ -1,35 +0,0 @@ -import gradio as gr -import pipeline - - -title="EfficientNetV2 Deepfakes Video Detector" -description="EfficientNetV2 Deepfakes Image Detector by using frame-by-frame detection." 
- - video_interface = gr.Interface(pipeline.deepfakes_video_predict, - gr.Video(), - "text", - examples = ["videos/celeb_synthesis.mp4", "videos/real-1.mp4"], - cache_examples = False - ) - - - image_interface = gr.Interface(pipeline.deepfakes_image_predict, - gr.Image(), - "text", - examples = ["images/lady.jpg", "images/fake_image.jpg"], - cache_examples=False - ) - - audio_interface = gr.Interface(pipeline.deepfakes_audio_predict, - gr.Audio(), - "text", - examples = ["audios/DF_E_2000027.flac", "audios/DF_E_2000031.flac"], - cache_examples = False) - - - app = gr.TabbedInterface(interface_list= [image_interface, video_interface, audio_interface], - tab_names = ['Image inference', 'Video inference', 'Audio inference']) - - if __name__ == '__main__': - app.launch(share = False) \ No newline at end of file diff --git a/spaces/Kreaols/ChuanhuChatGPT/run_Linux.sh deleted file mode 100644 index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000 --- a/spaces/Kreaols/ChuanhuChatGPT/run_Linux.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# Get the directory the script lives in -script_dir=$(dirname "$(readlink -f "$0")") - -# Change the working directory to the script's directory -cd "$script_dir" || exit - -# Check whether the Git repository has updates -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # If there are updates, stop the currently running server - pkill -f ChuanhuChatbot.py - - # Pull the latest changes - git pull - - # Install dependencies - pip3 install -r requirements.txt - - # Restart the server - nohup python3 ChuanhuChatbot.py & -fi - -# Check whether ChuanhuChatbot.py is running -if ! 
pgrep -f ChuanhuChatbot.py > /dev/null; then - # If it is not running, start the server - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/KunamVishnu/MyGenAiChatBot/app.py deleted file mode 100644 index d4e8c682c1696197371641afad14c940b3d5ab15..0000000000000000000000000000000000000000 --- a/spaces/KunamVishnu/MyGenAiChatBot/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a tech-savvy computer science student who spends countless hours coding, building apps, and keeping up with the latest tech trends. You enjoy discussing programming languages, AI, and gadgets and are always ready to troubleshoot tech-related problems. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. 
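The deleted app above delegates conversation state to LangChain's `ConversationBufferMemory`, which accumulates each exchange and splices the transcript into the `{chat_history}` slot of the prompt template before every model call. A minimal dependency-free sketch of that buffer-memory pattern (the `BufferMemory` class and `build_prompt` helper are illustrative names, not part of the original app, and the persona line is trimmed for brevity):

```python
# Sketch of the conversation-buffer pattern used by the app above,
# without LangChain: history is plain text rebuilt on every turn.

TEMPLATE = (
    "You are a tech-savvy computer science student.\n"  # persona trimmed
    "{chat_history}\n"
    "User: {user_message}\n"
    "Chatbot:"
)

class BufferMemory:
    """Accumulates (user, bot) turns and renders them as a text block."""

    def __init__(self):
        self.turns = []

    def save_context(self, user_message, bot_reply):
        self.turns.append((user_message, bot_reply))

    def render(self):
        return "\n".join(f"User: {u}\nChatbot: {b}" for u, b in self.turns)

def build_prompt(memory, user_message):
    # What LLMChain does implicitly before each call to the model.
    return TEMPLATE.format(chat_history=memory.render(),
                           user_message=user_message)

mem = BufferMemory()
first = build_prompt(mem, "What is Python?")
mem.save_context("What is Python?", "A programming language.")
second = build_prompt(mem, "Who created it?")
```

On the second turn the rendered prompt now carries the first exchange, which is what lets a stateless chat model appear to remember the conversation; the cost is that the prompt grows with every turn, which is why LangChain also offers windowed and summarizing memories.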
diff --git a/spaces/KyanChen/RSPrompter/configs/rsprompter/rsprompter_anchor_nwpu_config.py b/spaces/KyanChen/RSPrompter/configs/rsprompter/rsprompter_anchor_nwpu_config.py deleted file mode 100644 index ced400ede697aa4650b2743e8eb6e28fdb7df7ae..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/configs/rsprompter/rsprompter_anchor_nwpu_config.py +++ /dev/null @@ -1,345 +0,0 @@ -custom_imports = dict(imports=['mmseg.datasets', 'mmseg.models'], allow_failed_imports=False) - -sub_model_train = [ - 'panoptic_head', - 'data_preprocessor' -] - -sub_model_optim = { - 'panoptic_head': {'lr_mult': 1}, -} - -max_epochs = 1200 - -optimizer = dict( - type='AdamW', - sub_model=sub_model_optim, - lr=0.0005, - weight_decay=1e-3 -) - -param_scheduler = [ - # warm up learning rate scheduler - dict( - type='LinearLR', - start_factor=1e-4, - by_epoch=True, - begin=0, - end=1, - # update by iter - convert_to_iter_based=True), - # main learning rate scheduler - dict( - type='CosineAnnealingLR', - T_max=max_epochs, - by_epoch=True, - begin=1, - end=max_epochs, - ), -] - -param_scheduler_callback = dict( - type='ParamSchedulerHook' -) - -evaluator_ = dict( - type='CocoPLMetric', - metric=['bbox', 'segm'], - proposal_nums=[1, 10, 100] -) - -evaluator = dict( - val_evaluator=evaluator_, -) - - -image_size = (1024, 1024) - -data_preprocessor = dict( - type='mmdet.DetDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True, - pad_size_divisor=32, - pad_mask=True, - mask_pad_value=0, -) - -num_things_classes = 10 -num_stuff_classes = 0 -num_classes = num_things_classes + num_stuff_classes -prompt_shape = (60, 4) - -model_cfg = dict( - type='SegSAMAnchorPLer', - hyperparameters=dict( - optimizer=optimizer, - param_scheduler=param_scheduler, - evaluator=evaluator, - ), - need_train_names=sub_model_train, - data_preprocessor=data_preprocessor, - backbone=dict( - type='vit_h', - checkpoint='pretrain/sam/sam_vit_h_4b8939.pth', - 
# type='vit_b', - # checkpoint='pretrain/sam/sam_vit_b_01ec64.pth', - ), - panoptic_head=dict( - type='SAMAnchorInstanceHead', - neck=dict( - type='SAMAggregatorNeck', - in_channels=[1280] * 32, - # in_channels=[768] * 12, - inner_channels=32, - selected_channels=range(8, 32, 2), - # selected_channels=range(4, 12, 2), - out_channels=256, - up_sample_scale=4, - ), - rpn_head=dict( - type='mmdet.RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='mmdet.AnchorGenerator', - scales=[2, 4, 8, 16, 32, 64], - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32]), - bbox_coder=dict( - type='mmdet.DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='mmdet.CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='mmdet.SmoothL1Loss', loss_weight=1.0)), - roi_head=dict( - type='SAMAnchorPromptRoIHead', - bbox_roi_extractor=dict( - type='mmdet.SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[8, 16, 32]), - bbox_head=dict( - type='mmdet.Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=num_classes, - bbox_coder=dict( - type='mmdet.DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='mmdet.CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='mmdet.SmoothL1Loss', loss_weight=1.0)), - mask_roi_extractor=dict( - type='mmdet.SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[8, 16, 32]), - mask_head=dict( - type='SAMPromptMaskHead', - per_query_point=prompt_shape[1], - with_sincos=True, - class_agnostic=True, - loss_mask=dict( - type='mmdet.CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - 
assigner=dict( - type='mmdet.MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='mmdet.RandomSampler', - num=512, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='mmdet.MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='mmdet.RandomSampler', - num=256, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=1024, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5) - ) - ) -) - - -task_name = 'nwpu_ins' -exp_name = 'E20230629_1' -logger = dict( - type='WandbLogger', - project=task_name, - group='sam-anchor', - name=exp_name -) - - -callbacks = [ - param_scheduler_callback, - dict( - type='ModelCheckpoint', - dirpath=f'results/{task_name}/{exp_name}/checkpoints', - save_last=True, - mode='max', - monitor='valsegm_map_0', - save_top_k=3, - filename='epoch_{epoch}-map_{valsegm_map_0:.4f}' - ), - dict( - type='LearningRateMonitor', - logging_interval='step' - ) -] - - -trainer_cfg = dict( - compiled_model=False, - accelerator="auto", - strategy="auto", - # strategy="ddp", - # strategy='ddp_find_unused_parameters_true', - # precision='32', - # precision='16-mixed', - devices=8, - default_root_dir=f'results/{task_name}/{exp_name}', - # default_root_dir='results/tmp', - max_epochs=max_epochs, - logger=logger, - callbacks=callbacks, - log_every_n_steps=5, - check_val_every_n_epoch=5, - benchmark=True, - # 
sync_batchnorm=True, - # fast_dev_run=True, - - # limit_train_batches=1, - # limit_val_batches=0, - # limit_test_batches=None, - # limit_predict_batches=None, - # overfit_batches=0.0, - - # val_check_interval=None, - # num_sanity_val_steps=0, - # enable_checkpointing=None, - # enable_progress_bar=None, - # enable_model_summary=None, - # accumulate_grad_batches=32, - # gradient_clip_val=15, - # gradient_clip_algorithm='norm', - # deterministic=None, - # inference_mode: bool=True, - use_distributed_sampler=True, - # profiler="simple", - # detect_anomaly=False, - # barebones=False, - # plugins=None, - # reload_dataloaders_every_n_epochs=0, -) - - -backend_args = None -train_pipeline = [ - dict(type='mmdet.LoadImageFromFile'), - dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='mmdet.Resize', scale=image_size), - dict(type='mmdet.RandomFlip', prob=0.5), - dict(type='mmdet.PackDetInputs') -] - -test_pipeline = [ - dict(type='mmdet.LoadImageFromFile', backend_args=backend_args), - dict(type='mmdet.Resize', scale=image_size), - # If you don't have a gt annotation, delete the pipeline - dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor')) -] - - -train_batch_size_per_gpu = 2 -train_num_workers = 2 -test_batch_size_per_gpu = 2 -test_num_workers = 2 -persistent_workers = True - -data_parent = '/mnt/search01/dataset/cky_data/NWPU10' -train_data_prefix = '' -val_data_prefix = '' -dataset_type = 'NWPUInsSegDataset' - -val_loader = dict( - batch_size=test_batch_size_per_gpu, - num_workers=test_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - dataset=dict( - type=dataset_type, - data_root=data_parent, - ann_file='NWPU_instances_val.json', - data_prefix=dict(img_path='positive image set'), - test_mode=True, - filter_cfg=dict(filter_empty_gt=True, min_size=32), - pipeline=test_pipeline, - 
backend_args=backend_args)) - -datamodule_cfg = dict( - type='PLDataModule', - train_loader=dict( - batch_size=train_batch_size_per_gpu, - num_workers=train_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - dataset=dict( - type=dataset_type, - data_root=data_parent, - ann_file='NWPU_instances_train.json', - data_prefix=dict(img_path='positive image set'), - filter_cfg=dict(filter_empty_gt=True, min_size=32), - pipeline=train_pipeline, - backend_args=backend_args) - ), - val_loader=val_loader, - # test_loader=val_loader - predict_loader=val_loader -) \ No newline at end of file diff --git a/spaces/MMMMQZ/MQZGPT/modules/presets.py b/spaces/MMMMQZ/MQZGPT/modules/presets.py deleted file mode 100644 index 73a5ba4b7e213cbb6a3365ee9114757c0e4181b9..0000000000000000000000000000000000000000 --- a/spaces/MMMMQZ/MQZGPT/modules/presets.py +++ /dev/null @@ -1,222 +0,0 @@ -# -*- coding:utf-8 -*- -import os -from pathlib import Path -import gradio as gr -from .webui_locale import I18nAuto - -i18n = I18nAuto() # internationalization - -CHATGLM_MODEL = None -CHATGLM_TOKENIZER = None -LLAMA_MODEL = None -LLAMA_INFERENCER = None - -# ChatGPT 设置 -INITIAL_SYSTEM_PROMPT = "You are a helpful assistant." 
-API_HOST = "api.openai.com" -COMPLETION_URL = "https://api.openai.com/v1/chat/completions" -BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants" -USAGE_API_URL="https://api.openai.com/dashboard/billing/usage" -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -# Error messages -STANDARD_ERROR_MSG = i18n("☹️发生了错误:") # standard prefix for error messages -GENERAL_ERROR_MSG = i18n("获取对话时发生错误,请查看后台日志") -ERROR_RETRIEVE_MSG = i18n("请检查网络连接,或者API-Key是否有效。") -CONNECTION_TIMEOUT_MSG = i18n("连接超时,无法获取对话。") # connection timed out -READ_TIMEOUT_MSG = i18n("读取超时,无法获取对话。") # read timed out -PROXY_ERROR_MSG = i18n("代理错误,无法获取对话。") # proxy error -SSL_ERROR_PROMPT = i18n("SSL错误,无法获取对话。") # SSL error -NO_APIKEY_MSG = i18n("API key为空,请检查是否输入正确。") # API key shorter than 51 characters -NO_INPUT_MSG = i18n("请输入对话内容。") # no conversation input given -BILLING_NOT_APPLICABLE_MSG = i18n("账单信息不适用") # billing info returned by locally run models - -TIMEOUT_STREAMING = 60 # timeout for streaming conversations -TIMEOUT_ALL = 200 # timeout for non-streaming conversations -ENABLE_STREAMING_OPTION = True # whether to show the checkbox for toggling real-time display of answers -HIDE_MY_KEY = False # set to True to hide your API key in the UI -CONCURRENT_COUNT = 100 # number of users allowed at the same time - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -CHUANHU_TITLE = i18n("MQZChat 🚀") - -CHUANHU_DESCRIPTION = i18n("这一生,遇不到彼此最寂寞,遇到了,还是寂寞") - -FOOTER = """
    {versions}
    """ - -APPEARANCE_SWITCHER = """ -
    -"""+ i18n("切换亮暗色主题") + """ - -
    -""" - -SUMMARIZE_PROMPT = "你是谁?我们刚才聊了什么?" # 总结对话时的 prompt - -ONLINE_MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-0301", - "gpt-4", - "gpt-4-0314", - "gpt-4-32k", - "gpt-4-32k-0314", - "xmchat", -] - -LOCAL_MODELS = [ - "chatglm-6b", - "chatglm-6b-int4", - "chatglm-6b-int4-qe", - "llama-7b-hf", - "llama-13b-hf", - "llama-30b-hf", - "llama-65b-hf" -] - -if os.environ.get('HIDE_LOCAL_MODELS', 'false') == 'true': - MODELS = ONLINE_MODELS -else: - MODELS = ONLINE_MODELS + LOCAL_MODELS - -DEFAULT_MODEL = 0 - -os.makedirs("models", exist_ok=True) -os.makedirs("lora", exist_ok=True) -os.makedirs("history", exist_ok=True) -for dir_name in os.listdir("models"): - if os.path.isdir(os.path.join("models", dir_name)): - if dir_name not in MODELS: - MODELS.append(dir_name) - -MODEL_TOKEN_LIMIT = { - "gpt-3.5-turbo": 4096, - "gpt-3.5-turbo-0301": 4096, - "gpt-4": 8192, - "gpt-4-0314": 8192, - "gpt-4-32k": 32768, - "gpt-4-32k-0314": 32768 -} - -TOKEN_OFFSET = 1000 # 模型的token上限减去这个值,得到软上限。到达软上限之后,自动尝试减少token占用。 -DEFAULT_TOKEN_LIMIT = 3000 # 默认的token上限 -REDUCE_TOKEN_FACTOR = 0.5 # 与模型token上限想乘,得到目标token数。减少token占用时,将token占用减少到目标token数以下。 - -REPLY_LANGUAGES = [ - "简体中文", - "繁體中文", - "English", - "日本語", - "Español", - "Français", - "Deutsch", - "跟随问题语言(不稳定)" -] - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in {reply_language} -""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. 
-Make sure to cite results using [number] notation after the reference. -If the provided context information refers to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in {reply_language} -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. ------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -answer the question. -Reply in {reply_language} -If the context isn't useful, return the original answer. -""" - -ALREADY_CONVERTED_MARK = "" - -small_and_beautiful_theme = gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#02C160", - c100="rgba(2, 193, 96, 0.2)", - c200="#02C160", - c300="rgba(2, 193, 96, 0.32)", - c400="rgba(2, 193, 96, 0.32)", - c500="rgba(2, 193, 96, 1.0)", - c600="rgba(2, 193, 96, 1.0)", - c700="rgba(2, 193, 96, 0.32)", - c800="rgba(2, 193, 96, 0.32)", - c900="#02C160", - c950="#02C160", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f9fafb", - c100="#f3f4f6", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - c900="#272727", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - button_primary_background_fill="#06AE56", - button_primary_background_fill_dark="#06AE56", - button_primary_background_fill_hover="#07C863", - button_primary_border_color="#06AE56", - button_primary_border_color_dark="#06AE56", - button_primary_text_color="#FFFFFF", - 
button_primary_text_color_dark="#FFFFFF", - button_secondary_background_fill="#F2F2F2", - button_secondary_background_fill_dark="#2B2B2B", - button_secondary_text_color="#393939", - button_secondary_text_color_dark="#FFFFFF", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - block_title_text_color="*primary_500", - block_title_background_fill="*primary_100", - input_background_fill="#F6F6F6", - ) diff --git a/spaces/MMMMQZ/MQZGPT/run_Linux.sh deleted file mode 100644 index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000 --- a/spaces/MMMMQZ/MQZGPT/run_Linux.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# Get the directory the script lives in -script_dir=$(dirname "$(readlink -f "$0")") - -# Change the working directory to the script's directory -cd "$script_dir" || exit - -# Check whether the Git repository has updates -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # If there are updates, stop the currently running server - pkill -f ChuanhuChatbot.py - - # Pull the latest changes - git pull - - # Install dependencies - pip3 install -r requirements.txt - - # Restart the server - nohup python3 ChuanhuChatbot.py & -fi - -# Check whether ChuanhuChatbot.py is running -if ! pgrep -f ChuanhuChatbot.py > /dev/null; then - # If it is not running, start the server - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/MUmairAB/MaskedLM_App/app.py deleted file mode 100644 index 211507de72c626c4030345aa532a97a98da68078..0000000000000000000000000000000000000000 --- a/spaces/MUmairAB/MaskedLM_App/app.py +++ /dev/null @@ -1,51 +0,0 @@ -# import the module -import streamlit as st -from transformers import pipeline -#Import the model -model = pipeline(task="fill-mask", - model="MUmairAB/bert-based-MaskedLM") - -#Typically, the model should be imported within a function. However, in this case, we are downloading it outside the function to avoid a significant delay that could annoy the user when downloading it inside the main function. 
By loading the model at this point, it will be downloaded when the app runs, and the user will overlook this initial loading time, as opposed to experiencing a delay after entering the input. - - - -#This function accepts the masked text like: "How are [MASK]" -# and feeds this text to the model and prints the output in which [MASK] is filled with the appropriate word. -def print_the_mask(text): - - #Apply the model - model_out = model(text) - - #First sort the list of dictionaries according to the score - model_out = sorted(model_out, key=lambda x: x['score'],reverse=True) - for sub_dict in model_out: - st.success(sub_dict["sequence"]) - - -#The main function that will be executed when this file is executed -def main(): - # Set the title - st.title("Masked Language Model App") - st.write("Created by: [Umair Akram](https://www.linkedin.com/in/m-umair01/)") - - h1 = "This App uses a fine-tuned DistilBERT-Base-Uncased Masked Language Model to predict the missed word in a sentence." - st.subheader(h1) - - st.write("Its code and other interesting projects are available on my [website](https://mumairab.github.io/)") - h2 = "Enter your text and put \"[MASK]\" at the word which you want to predict, as shown in the following example: Can we [MASK] to Paris?" - st.write(h2) - - text = st.text_input(label="Enter your text here:", - value="Type here ...") - - if(st.button('Submit')): - # Perform the input validation - if "[MASK]" not in text: - st.write("You did not enter \"[MASK]\" in the text. 
Please write your text again!") - else: - print_the_mask(text) - -#Call the main function -if __name__ == "__main__": - #Launch the Gradio interface - main() \ No newline at end of file diff --git a/spaces/MVV/3dTopDenoising/models/SAP/utils.py b/spaces/MVV/3dTopDenoising/models/SAP/utils.py deleted file mode 100644 index 437b9ea45fb227bde147d2a99fea88e2eb55282a..0000000000000000000000000000000000000000 --- a/spaces/MVV/3dTopDenoising/models/SAP/utils.py +++ /dev/null @@ -1,526 +0,0 @@ -import torch -import io, os, logging, urllib -import yaml -import trimesh -import imageio -import numbers -import math -import numpy as np -from collections import OrderedDict -from plyfile import PlyData -from torch import nn -from torch.nn import functional as F -from torch.utils import model_zoo -from skimage import measure, img_as_float32 -from igl import adjacency_matrix, connected_components - -################################################## -# Below are functions for DPSR - -def fftfreqs(res, dtype=torch.float32, exact=True): - """ - Helper function to return frequency tensors - :param res: n_dims int tuple of number of frequency modes - :return: - """ - - n_dims = len(res) - freqs = [] - for dim in range(n_dims - 1): - r_ = res[dim] - freq = np.fft.fftfreq(r_, d=1/r_) - freqs.append(torch.tensor(freq, dtype=dtype)) - r_ = res[-1] - if exact: - freqs.append(torch.tensor(np.fft.rfftfreq(r_, d=1/r_), dtype=dtype)) - else: - freqs.append(torch.tensor(np.fft.rfftfreq(r_, d=1/r_)[:-1], dtype=dtype)) - omega = torch.meshgrid(freqs) - omega = list(omega) - omega = torch.stack(omega, dim=-1) - - return omega - -def img(x, deg=1): # imaginary of tensor (assume last dim: real/imag) - """ - multiply tensor x by i ** deg - """ - deg %= 4 - if deg == 0: - res = x - elif deg == 1: - res = x[..., [1, 0]] - res[..., 0] = -res[..., 0] - elif deg == 2: - res = -x - elif deg == 3: - res = x[..., [1, 0]] - res[..., 1] = -res[..., 1] - return res - -def spec_gaussian_filter(res, sig): - omega 
= fftfreqs(res, dtype=torch.float64) # [dim0, dim1, dim2, d] - dis = torch.sqrt(torch.sum(omega ** 2, dim=-1)) - filter_ = torch.exp(-0.5*((sig*2*dis/res[0])**2)).unsqueeze(-1).unsqueeze(-1) - filter_.requires_grad = False - - return filter_ - -def grid_interp(grid, pts, batched=True): - """ - :param grid: tensor of shape (batch, *size, in_features) - :param pts: tensor of shape (batch, num_points, dim) within range (0, 1) - :return values at query points - """ - if not batched: - grid = grid.unsqueeze(0) - pts = pts.unsqueeze(0) - dim = pts.shape[-1] - bs = grid.shape[0] - size = torch.tensor(grid.shape[1:-1]).to(grid.device).type(pts.dtype) - cubesize = 1.0 / size - - ind0 = torch.floor(pts / cubesize).long() # (batch, num_points, dim) - ind1 = torch.fmod(torch.ceil(pts / cubesize), size).long() # periodic wrap-around - ind01 = torch.stack((ind0, ind1), dim=0) # (2, batch, num_points, dim) - tmp = torch.tensor([0,1],dtype=torch.long) - com_ = torch.stack(torch.meshgrid(tuple([tmp] * dim)), dim=-1).view(-1, dim) - dim_ = torch.arange(dim).repeat(com_.shape[0], 1) # (2**dim, dim) - ind_ = ind01[com_, ..., dim_] # (2**dim, dim, batch, num_points) - ind_n = ind_.permute(2, 3, 0, 1) # (batch, num_points, 2**dim, dim) - ind_b = torch.arange(bs).expand(ind_n.shape[1], ind_n.shape[2], bs).permute(2, 0, 1) # (batch, num_points, 2**dim) - # latent code on neighbor nodes - if dim == 2: - lat = grid.clone()[ind_b, ind_n[..., 0], ind_n[..., 1]] # (batch, num_points, 2**dim, in_features) - else: - lat = grid.clone()[ind_b, ind_n[..., 0], ind_n[..., 1], ind_n[..., 2]] # (batch, num_points, 2**dim, in_features) - - # weights of neighboring nodes - xyz0 = ind0.type(cubesize.dtype) * cubesize # (batch, num_points, dim) - xyz1 = (ind0.type(cubesize.dtype) + 1) * cubesize # (batch, num_points, dim) - xyz01 = torch.stack((xyz0, xyz1), dim=0) # (2, batch, num_points, dim) - pos = xyz01[com_, ..., dim_].permute(2,3,0,1) # (batch, num_points, 2**dim, dim) - pos_ = xyz01[1-com_, ..., 
dim_].permute(2,3,0,1) # (batch, num_points, 2**dim, dim) - pos_ = pos_.type(pts.dtype) - dxyz_ = torch.abs(pts.unsqueeze(-2) - pos_) / cubesize # (batch, num_points, 2**dim, dim) - weights = torch.prod(dxyz_, dim=-1, keepdim=False) # (batch, num_points, 2**dim) - query_values = torch.sum(lat * weights.unsqueeze(-1), dim=-2) # (batch, num_points, in_features) - if not batched: - query_values = query_values.squeeze(0) - - return query_values - -def scatter_to_grid(inds, vals, size): - """ - Scatter update values into empty tensor of size size. - :param inds: (#values, dims) - :param vals: (#values) - :param size: tuple for size. len(size)=dims - """ - dims = inds.shape[1] - assert(inds.shape[0] == vals.shape[0]) - assert(len(size) == dims) - dev = vals.device - # result = torch.zeros(*size).view(-1).to(dev).type(vals.dtype) # flatten - # # flatten inds - result = torch.zeros(*size, device=dev).view(-1).type(vals.dtype) # flatten - # flatten inds - fac = [np.prod(size[i+1:]) for i in range(len(size)-1)] + [1] - fac = torch.tensor(fac, device=dev).type(inds.dtype) - inds_fold = torch.sum(inds*fac, dim=-1) # [#values,] - result.scatter_add_(0, inds_fold, vals) - result = result.view(*size) - return result - -def point_rasterize(pts, vals, size): - """ - :param pts: point coords, tensor of shape (batch, num_points, dim) within range (0, 1) - :param vals: point values, tensor of shape (batch, num_points, features) - :param size: len(size)=dim tuple for grid size - :return rasterized values (batch, features, res0, res1, res2) - """ - dim = pts.shape[-1] - assert(pts.shape[:2] == vals.shape[:2]) - assert(pts.shape[2] == dim) - size_list = list(size) - size = torch.tensor(size).to(pts.device).float() - cubesize = 1.0 / size - bs = pts.shape[0] - nf = vals.shape[-1] - npts = pts.shape[1] - dev = pts.device - - ind0 = torch.floor(pts / cubesize).long() # (batch, num_points, dim) - ind1 = torch.fmod(torch.ceil(pts / cubesize), size).long() # periodic wrap-around - ind01 = 
torch.stack((ind0, ind1), dim=0) # (2, batch, num_points, dim) - tmp = torch.tensor([0,1],dtype=torch.long) - com_ = torch.stack(torch.meshgrid(tuple([tmp] * dim)), dim=-1).view(-1, dim) - dim_ = torch.arange(dim).repeat(com_.shape[0], 1) # (2**dim, dim) - ind_ = ind01[com_, ..., dim_] # (2**dim, dim, batch, num_points) - ind_n = ind_.permute(2, 3, 0, 1) # (batch, num_points, 2**dim, dim) - # ind_b = torch.arange(bs).expand(ind_n.shape[1], ind_n.shape[2], bs).permute(2, 0, 1) # (batch, num_points, 2**dim) - ind_b = torch.arange(bs, device=dev).expand(ind_n.shape[1], ind_n.shape[2], bs).permute(2, 0, 1) # (batch, num_points, 2**dim) - - # weights of neighboring nodes - xyz0 = ind0.type(cubesize.dtype) * cubesize # (batch, num_points, dim) - xyz1 = (ind0.type(cubesize.dtype) + 1) * cubesize # (batch, num_points, dim) - xyz01 = torch.stack((xyz0, xyz1), dim=0) # (2, batch, num_points, dim) - pos = xyz01[com_, ..., dim_].permute(2,3,0,1) # (batch, num_points, 2**dim, dim) - pos_ = xyz01[1-com_, ..., dim_].permute(2,3,0,1) # (batch, num_points, 2**dim, dim) - pos_ = pos_.type(pts.dtype) - dxyz_ = torch.abs(pts.unsqueeze(-2) - pos_) / cubesize # (batch, num_points, 2**dim, dim) - weights = torch.prod(dxyz_, dim=-1, keepdim=False) # (batch, num_points, 2**dim) - - ind_b = ind_b.unsqueeze(-1).unsqueeze(-1) # (batch, num_points, 2**dim, 1, 1) - ind_n = ind_n.unsqueeze(-2) # (batch, num_points, 2**dim, 1, dim) - ind_f = torch.arange(nf, device=dev).view(1, 1, 1, nf, 1) # (1, 1, 1, nf, 1) - # ind_f = torch.arange(nf).view(1, 1, 1, nf, 1) # (1, 1, 1, nf, 1) - - ind_b = ind_b.expand(bs, npts, 2**dim, nf, 1) - ind_n = ind_n.expand(bs, npts, 2**dim, nf, dim).to(dev) - ind_f = ind_f.expand(bs, npts, 2**dim, nf, 1) - inds = torch.cat([ind_b, ind_f, ind_n], dim=-1) # (batch, num_points, 2**dim, nf, 1+1+dim) - - # weighted values - vals = weights.unsqueeze(-1) * vals.unsqueeze(-2) # (batch, num_points, 2**dim, nf) - - inds = inds.view(-1, dim+2).permute(1, 0).long() # (1+dim+1, 
bs*npts*2**dim*nf) - vals = vals.reshape(-1) # (bs*npts*2**dim*nf) - tensor_size = [bs, nf] + size_list - raster = scatter_to_grid(inds.permute(1, 0), vals, [bs, nf] + size_list) - - return raster # [batch, nf, res, res, res] - - - -################################################## -# Below are general utility functions - -class AverageMeter(object): - """Computes and stores the average and current value""" - def __init__(self): - self.reset() - - def reset(self): - self.val = 0 - self.n = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.n = n - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - @property - def valcavg(self): - return self.val.sum().item() / (self.n != 0).sum().item() - - @property - def avgcavg(self): - return self.avg.sum().item() / (self.count != 0).sum().item() - -def load_model_manual(state_dict, model): - new_state_dict = OrderedDict() - is_model_parallel = isinstance(model, torch.nn.DataParallel) - for k, v in state_dict.items(): - if k.startswith('module.') != is_model_parallel: - if k.startswith('module.'): - # remove module - k = k[7:] - else: - # add module - k = 'module.'
+ k - - new_state_dict[k]=v - - model.load_state_dict(new_state_dict) - -def mc_from_psr(psr_grid, pytorchify=False, real_scale=False, zero_level=0): - ''' - Run marching cubes from PSR grid - ''' - batch_size = psr_grid.shape[0] - s = psr_grid.shape[-1] # size of psr_grid - psr_grid_numpy = psr_grid.squeeze().detach().cpu().numpy() - - if batch_size>1: - verts, faces, normals = [], [], [] - for i in range(batch_size): - verts_cur, faces_cur, normals_cur, values = measure.marching_cubes(psr_grid_numpy[i], level=0) - verts.append(verts_cur) - faces.append(faces_cur) - normals.append(normals_cur) - verts = np.stack(verts, axis = 0) - faces = np.stack(faces, axis = 0) - normals = np.stack(normals, axis = 0) - else: - try: - verts, faces, normals, values = measure.marching_cubes(psr_grid_numpy, level=zero_level) - except: - verts, faces, normals, values = measure.marching_cubes(psr_grid_numpy) - if real_scale: - verts = verts / (s-1) # scale to range [0, 1] - else: - verts = verts / s # scale to range [0, 1) - - if pytorchify: - device = psr_grid.device - verts = torch.Tensor(np.ascontiguousarray(verts)).to(device) - faces = torch.Tensor(np.ascontiguousarray(faces)).to(device) - normals = torch.Tensor(np.ascontiguousarray(-normals)).to(device) - - return verts, faces, normals - -def calc_inters_points(verts, faces, pose, img_size, mask_gt=None): - verts = verts.squeeze() - faces = faces.squeeze() - pix_to_face, w, mask = mesh_rasterization(verts, faces, pose, img_size) - if mask_gt is not None: - #! 
only evaluate within the intersection - mask = mask & mask_gt - # find 3D points intersected on the mesh - if True: - w_masked = w[mask] - f_p = faces[pix_to_face[mask]].long() # corresponding faces for each pixel - # corresponding vertices for p_closest - v_a, v_b, v_c = verts[f_p[..., 0]], verts[f_p[..., 1]], verts[f_p[..., 2]] - - # calculate the intersection point of each pixel and the mesh - p_inters = w_masked[..., 0, None] * v_a + \ - w_masked[..., 1, None] * v_b + \ - w_masked[..., 2, None] * v_c - else: - # backproject ndc to world coordinates using z-buffer - W, H = img_size[1], img_size[0] - xy = uv.to(mask.device)[mask] - x_ndc = 1 - (2*xy[:, 0]) / (W - 1) - y_ndc = 1 - (2*xy[:, 1]) / (H - 1) - z = zbuf.squeeze().reshape(H * W)[mask] - xy_depth = torch.stack((x_ndc, y_ndc, z), dim=1) - - p_inters = pose.unproject_points(xy_depth, world_coordinates=True) - - # if there are outlier points, we should remove them - if (p_inters.max()>1) | (p_inters.min()<-1): - mask_bound = (p_inters>=-1) & (p_inters<=1) - mask_bound = (mask_bound.sum(dim=-1)==3) - mask[mask==True] = mask_bound - p_inters = p_inters[mask_bound] - print('!!!!!find outlier!') - - return p_inters, mask, f_p, w_masked - -def mesh_rasterization(verts, faces, pose, img_size): - ''' - Use PyTorch3D to rasterize the mesh given a camera - ''' - transformed_v = pose.transform_points(verts.detach()) # world -> ndc coordinate system - if isinstance(pose, PerspectiveCameras): - transformed_v[..., 2] = 1/transformed_v[..., 2] - # find p_closest on mesh of each pixel via rasterization - transformed_mesh = Meshes(verts=[transformed_v], faces=[faces]) - pix_to_face, zbuf, bary_coords, dists = rasterize_meshes( - transformed_mesh, - image_size=img_size, - blur_radius=0, - faces_per_pixel=1, - perspective_correct=False - ) - pix_to_face = pix_to_face.reshape(1, -1) # B x reso x reso -> B x (reso x reso) - mask = pix_to_face.clone() != -1 - mask = mask.squeeze() - pix_to_face = pix_to_face.squeeze() - w =
bary_coords.reshape(-1, 3) - - return pix_to_face, w, mask - -def verts_on_largest_mesh(verts, faces): - ''' - verts: Numpy array or Torch.Tensor (N, 3) - faces: Numpy array (N, 3) - ''' - if torch.is_tensor(faces): - verts = verts.squeeze().detach().cpu().numpy() - faces = faces.squeeze().int().detach().cpu().numpy() - - A = adjacency_matrix(faces) - num, conn_idx, conn_size = connected_components(A) - if num == 0: - v_large, f_large = verts, faces - else: - max_idx = conn_size.argmax() # find the index of the largest component - v_large = verts[conn_idx==max_idx] # keep points on the largest component - - if True: - mesh_largest = trimesh.Trimesh(verts, faces) - connected_comp = mesh_largest.split(only_watertight=False) - mesh_largest = connected_comp[max_idx] - v_large, f_large = mesh_largest.vertices, mesh_largest.faces - v_large = v_large.astype(np.float32) - return v_large, f_large - -def update_recursive(dict1, dict2): - ''' Update two config dictionaries recursively. - - Args: - dict1 (dict): first dictionary to be updated - dict2 (dict): second dictionary which entries should be used - - ''' - for k, v in dict2.items(): - if k not in dict1: - dict1[k] = dict() - if isinstance(v, dict): - update_recursive(dict1[k], v) - else: - dict1[k] = v - -def scale2onet(p, scale=1.2): - ''' - Scale the point cloud from SAP to ONet range - ''' - return (p - 0.5) * scale - -def update_optimizer(inputs, cfg, epoch, model=None, schedule=None): - if model is not None: - if schedule is not None: - optimizer = torch.optim.Adam([ - {"params": model.parameters(), - "lr": schedule[0].get_learning_rate(epoch)}, - {"params": inputs, - "lr": schedule[1].get_learning_rate(epoch)}]) - elif 'lr' in cfg['train']: - optimizer = torch.optim.Adam([ - {"params": model.parameters(), - "lr": float(cfg['train']['lr'])}, - {"params": inputs, - "lr": float(cfg['train']['lr_pcl'])}]) - else: - raise Exception('no known learning rate') - else: - if schedule is not None: - optimizer = 
torch.optim.Adam([inputs], lr=schedule[0].get_learning_rate(epoch)) - else: - optimizer = torch.optim.Adam([inputs], lr=float(cfg['train']['lr_pcl'])) - - return optimizer - - -def is_url(url): - scheme = urllib.parse.urlparse(url).scheme - return scheme in ('http', 'https') - -def load_url(url): - '''Load a module dictionary from url. - - Args: - url (str): url to saved model - ''' - print(url) - print('=> Loading checkpoint from url...') - state_dict = model_zoo.load_url(url, progress=True) - - return state_dict - - -class GaussianSmoothing(nn.Module): - """ - Apply gaussian smoothing on a - 1d, 2d or 3d tensor. Filtering is performed separately for each channel - in the input using a depthwise convolution. - Arguments: - channels (int, sequence): Number of channels of the input tensors. Output will have this number of channels as well. - kernel_size (int, sequence): Size of the gaussian kernel. - sigma (float, sequence): Standard deviation of the gaussian kernel. - dim (int, optional): The number of dimensions of the data. - Default value is 3 (volumetric). - """ - def __init__(self, channels, kernel_size, sigma, dim=3): - super(GaussianSmoothing, self).__init__() - if isinstance(kernel_size, numbers.Number): - kernel_size = [kernel_size] * dim - if isinstance(sigma, numbers.Number): - sigma = [sigma] * dim - - # The gaussian kernel is the product of the - # gaussian function of each dimension. - kernel = 1 - meshgrids = torch.meshgrid( - [ - torch.arange(size, dtype=torch.float32) - for size in kernel_size - ] - ) - for size, std, mgrid in zip(kernel_size, sigma, meshgrids): - mean = (size - 1) / 2 - kernel *= 1 / (std * math.sqrt(2 * math.pi)) * \ - torch.exp(-((mgrid - mean) / std) ** 2 / 2) - - # Make sure sum of values in gaussian kernel equals 1.
- kernel = kernel / torch.sum(kernel) - - # Reshape to depthwise convolutional weight - kernel = kernel.view(1, 1, *kernel.size()) - kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1)) - - self.register_buffer('weight', kernel) - self.groups = channels - - if dim == 1: - self.conv = F.conv1d - elif dim == 2: - self.conv = F.conv2d - elif dim == 3: - self.conv = F.conv3d - else: - raise RuntimeError( - 'Only 1, 2 and 3 dimensions are supported. Received {}.'.format(dim) - ) - - def forward(self, input): - """ - Apply gaussian filter to input. - Arguments: - input (torch.Tensor): Input to apply gaussian filter on. - Returns: - filtered (torch.Tensor): Filtered output. - """ - return self.conv(input, weight=self.weight, groups=self.groups) - -# Originally from https://github.com/amosgropp/IGR/blob/0db06b1273/code/utils/general.py -def get_learning_rate_schedules(schedule_specs): - - schedules = [] - - for key in schedule_specs.keys(): - schedules.append(StepLearningRateSchedule( - schedule_specs[key]['initial'], - schedule_specs[key]["interval"], - schedule_specs[key]["factor"], - schedule_specs[key]["final"])) - return schedules - -class LearningRateSchedule: - def get_learning_rate(self, epoch): - pass -class StepLearningRateSchedule(LearningRateSchedule): - def __init__(self, initial, interval, factor, final=1e-6): - self.initial = float(initial) - self.interval = interval - self.factor = factor - self.final = float(final) - - def get_learning_rate(self, epoch): - lr = np.maximum(self.initial * (self.factor ** (epoch // self.interval)), 5.0e-6) - if lr > self.final: - return lr - else: - return self.final - -def adjust_learning_rate(lr_schedules, optimizer, epoch): - for i, param_group in enumerate(optimizer.param_groups): - param_group["lr"] = lr_schedules[i].get_learning_rate(epoch) \ No newline at end of file diff --git a/spaces/Mahiruoshi/MyGO_VIts-bert/data_utils.py b/spaces/Mahiruoshi/MyGO_VIts-bert/data_utils.py deleted file mode 100644 index 
5bf1132b3b2f88e4c645816b3bacfc1bd87474be..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/MyGO_VIts-bert/data_utils.py +++ /dev/null @@ -1,406 +0,0 @@ -import os -import random -import torch -import torch.utils.data -from tqdm import tqdm -from loguru import logger -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr( - hparams, "use_mel_posterior_encoder", False - ) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - logger.info("Init 
dataset...") - for _id, spk, language, text, phones, tone, word2ph in tqdm( - self.audiopaths_sid_text - ): - audiopath = f"{_id}" - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append( - [audiopath, spk, language, text, phones, tone, word2ph] - ) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - logger.info( - "skipped: " - + str(skipped) - + ", total: " - + str(len(self.audiopaths_sid_text)) - ) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, ja_bert, phones, tone, language = self.get_text( - text, word2ph, phones, tone, language, audiopath - ) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, sid, tone, language, bert, ja_bert) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError( - "{} {} SR doesn't match target {} SR".format( - filename, sampling_rate, self.sampling_rate - ) - ) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if self.use_mel_spec_posterior: - spec_filename = spec_filename.replace(".spec.pt", ".mel.pt") - try: - spec = torch.load(spec_filename) - except: - if self.use_mel_spec_posterior: - spec = mel_spectrogram_torch( - audio_norm, - self.filter_length, - self.n_mel_channels, - self.sampling_rate, - self.hop_length, - self.win_length, - self.hparams.mel_fmin, - self.hparams.mel_fmax, - center=False, - ) - else: - spec = spectrogram_torch( - audio_norm, - 
self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text, word2ph, phone, tone, language_str, wav_path): - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - if self.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - torch.save(bert, bert_path) - assert bert.shape[-1] == len(phone), phone - - if language_str == "ZH": - bert = bert - ja_bert = torch.zeros(768, len(phone)) - elif language_str == "JA": - ja_bert = bert - bert = torch.zeros(1024, len(phone)) - else: - bert = torch.zeros(1024, len(phone)) - ja_bert = torch.zeros(768, len(phone)) - assert bert.shape[-1] == len(phone), ( - bert.shape, - len(phone), - sum(word2ph), - p1, - p2, - t1, - t2, - pold, - pold2, - word2ph, - text, - w2pho, - ) - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, ja_bert, phone, tone, language - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate: - """Zero-pads model inputs and targets""" - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ 
- # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), dim=0, descending=True - ) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), max_text_len) - language_padded = torch.LongTensor(len(batch), max_text_len) - bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len) - ja_bert_padded = torch.FloatTensor(len(batch), 768, max_text_len) - - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - ja_bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, : text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, : spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, : wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, : tone.size(0)] = tone - - language = row[5] - language_padded[i, : language.size(0)] = language - - bert = row[6] - bert_padded[i, :, : bert.size(1)] = bert - - ja_bert = row[7] - ja_bert_padded[i, :, : ja_bert.size(1)] = ja_bert - - return ( - text_padded, - text_lengths, - spec_padded, - spec_lengths, - wav_padded, - wav_lengths, - sid, - tone_padded, - language_padded, - bert_padded, - ja_bert_padded, - ) - - -class 
DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__( - self, - dataset, - batch_size, - boundaries, - num_replicas=None, - rank=None, - shuffle=True, - ): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - try: - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - assert all(len(bucket) > 0 for bucket in buckets) - # When one bucket is not traversed - except Exception as e: - print("Bucket warning ", e) - for i in range(len(buckets) - 1, -1, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = ( - total_batch_size - (len_bucket % total_batch_size) - ) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - 
g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if len_bucket == 0: - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ( - ids_bucket - + ids_bucket * (rem // len_bucket) - + ids_bucket[: (rem % len_bucket)] - ) - - # subsample - ids_bucket = ids_bucket[self.rank :: self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [ - bucket[idx] - for idx in ids_bucket[ - j * self.batch_size : (j + 1) * self.batch_size - ] - ] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/MathysL/AutoGPT4/data_ingestion.py b/spaces/MathysL/AutoGPT4/data_ingestion.py deleted file mode 100644 index b89a33dafd15c2e7bded0445a741a4a1c47ed417..0000000000000000000000000000000000000000 --- a/spaces/MathysL/AutoGPT4/data_ingestion.py +++ /dev/null @@ -1,96 +0,0 @@ -import argparse -import logging - -from autogpt.commands.file_operations import ingest_file, search_files 
-from autogpt.config import Config -from autogpt.memory import get_memory - -cfg = Config() - - -def configure_logging(): - logging.basicConfig( - filename="log-ingestion.txt", - filemode="a", - format="%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s", - datefmt="%H:%M:%S", - level=logging.DEBUG, - ) - return logging.getLogger("AutoGPT-Ingestion") - - -def ingest_directory(directory, memory, args): - """ - Ingest all files in a directory by calling the ingest_file function for each file. - - :param directory: The directory containing the files to ingest - :param memory: An object with an add() method to store the chunks in memory - """ - try: - files = search_files(directory) - for file in files: - ingest_file(file, memory, args.max_length, args.overlap) - except Exception as e: - print(f"Error while ingesting directory '{directory}': {str(e)}") - - -def main() -> None: - logger = configure_logging() - - parser = argparse.ArgumentParser( - description="Ingest a file or a directory with multiple files into memory. " - "Make sure to set your .env before running this script." - ) - group = parser.add_mutually_exclusive_group(required=True) - group.add_argument("--file", type=str, help="The file to ingest.") - group.add_argument( - "--dir", type=str, help="The directory containing the files to ingest." 
- ) - parser.add_argument( - "--init", - action="store_true", - help="Init the memory and wipe its content (default: False)", - default=False, - ) - parser.add_argument( - "--overlap", - type=int, - help="The overlap size between chunks when ingesting files (default: 200)", - default=200, - ) - parser.add_argument( - "--max_length", - type=int, - help="The max_length of each chunk when ingesting files (default: 4000)", - default=4000, - ) - - args = parser.parse_args() - - # Initialize memory - memory = get_memory(cfg, init=args.init) - print("Using memory of type: " + memory.__class__.__name__) - - if args.file: - try: - ingest_file(args.file, memory, args.max_length, args.overlap) - print(f"File '{args.file}' ingested successfully.") - except Exception as e: - logger.error(f"Error while ingesting file '{args.file}': {str(e)}") - print(f"Error while ingesting file '{args.file}': {str(e)}") - elif args.dir: - try: - ingest_directory(args.dir, memory, args) - print(f"Directory '{args.dir}' ingested successfully.") - except Exception as e: - logger.error(f"Error while ingesting directory '{args.dir}': {str(e)}") - print(f"Error while ingesting directory '{args.dir}': {str(e)}") - else: - print( - "Please provide either a file path (--file) or a directory name (--dir)" - " inside the auto_gpt_workspace directory as input." - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/custom_build_augmentation.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/custom_build_augmentation.py deleted file mode 100644 index 9642c15e582fc953ecaa378a325b4fa02f4e7d28..0000000000000000000000000000000000000000 --- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/custom_build_augmentation.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import numpy as np -import pycocotools.mask as mask_util -import torch -from fvcore.common.file_io import PathManager -from PIL import Image - - -from detectron2.data import transforms as T -from .transforms.custom_augmentation_impl import EfficientDetResizeCrop - -def build_custom_augmentation(cfg, is_train, scale=None, size=None, \ - min_size=None, max_size=None): - """ - Create a list of default :class:`Augmentation` from config. - Now it includes resizing and flipping. - - Returns: - list[Augmentation] - """ - if cfg.INPUT.CUSTOM_AUG == 'ResizeShortestEdge': - if is_train: - min_size = cfg.INPUT.MIN_SIZE_TRAIN if min_size is None else min_size - max_size = cfg.INPUT.MAX_SIZE_TRAIN if max_size is None else max_size - sample_style = cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING - else: - min_size = cfg.INPUT.MIN_SIZE_TEST - max_size = cfg.INPUT.MAX_SIZE_TEST - sample_style = "choice" - augmentation = [T.ResizeShortestEdge(min_size, max_size, sample_style)] - elif cfg.INPUT.CUSTOM_AUG == 'EfficientDetResizeCrop': - if is_train: - scale = cfg.INPUT.SCALE_RANGE if scale is None else scale - size = cfg.INPUT.TRAIN_SIZE if size is None else size - else: - scale = (1, 1) - size = cfg.INPUT.TEST_SIZE - augmentation = [EfficientDetResizeCrop(size, scale)] - else: - assert 0, cfg.INPUT.CUSTOM_AUG - - if is_train: - augmentation.append(T.RandomFlip()) - return augmentation - - -build_custom_transform_gen = build_custom_augmentation -""" -Alias for backward-compatibility. 
-""" \ No newline at end of file diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/promptgenerator.py b/spaces/MetaWabbit/Auto-GPT/autogpt/promptgenerator.py deleted file mode 100644 index 0ad7046a0c41dab356abcd0151b65890e5544cd2..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/autogpt/promptgenerator.py +++ /dev/null @@ -1,138 +0,0 @@ -""" A module for generating custom prompt strings.""" -from __future__ import annotations - -import json -from typing import Any - - -class PromptGenerator: - """ - A class for generating custom prompt strings based on constraints, commands, - resources, and performance evaluations. - """ - - def __init__(self) -> None: - """ - Initialize the PromptGenerator object with empty lists of constraints, - commands, resources, and performance evaluations. - """ - self.constraints = [] - self.commands = [] - self.resources = [] - self.performance_evaluation = [] - self.response_format = { - "thoughts": { - "text": "thought", - "reasoning": "reasoning", - "plan": "- short bulleted\n- list that conveys\n- long-term plan", - "criticism": "constructive self-criticism", - "speak": "thoughts summary to say to user", - }, - "command": {"name": "command name", "args": {"arg name": "value"}}, - } - - def add_constraint(self, constraint: str) -> None: - """ - Add a constraint to the constraints list. - - Args: - constraint (str): The constraint to be added. - """ - self.constraints.append(constraint) - - def add_command(self, command_label: str, command_name: str, args=None) -> None: - """ - Add a command to the commands list with a label, name, and optional arguments. - - Args: - command_label (str): The label of the command. - command_name (str): The name of the command. - args (dict, optional): A dictionary containing argument names and their - values. Defaults to None. 
- """ - if args is None: - args = {} - - command_args = {arg_key: arg_value for arg_key, arg_value in args.items()} - - command = { - "label": command_label, - "name": command_name, - "args": command_args, - } - - self.commands.append(command) - - def _generate_command_string(self, command: dict[str, Any]) -> str: - """ - Generate a formatted string representation of a command. - - Args: - command (dict): A dictionary containing command information. - - Returns: - str: The formatted command string. - """ - args_string = ", ".join( - f'"{key}": "{value}"' for key, value in command["args"].items() - ) - return f'{command["label"]}: "{command["name"]}", args: {args_string}' - - def add_resource(self, resource: str) -> None: - """ - Add a resource to the resources list. - - Args: - resource (str): The resource to be added. - """ - self.resources.append(resource) - - def add_performance_evaluation(self, evaluation: str) -> None: - """ - Add a performance evaluation item to the performance_evaluation list. - - Args: - evaluation (str): The evaluation item to be added. - """ - self.performance_evaluation.append(evaluation) - - def _generate_numbered_list(self, items: list[Any], item_type="list") -> str: - """ - Generate a numbered list from given items based on the item_type. - - Args: - items (list): A list of items to be numbered. - item_type (str, optional): The type of items in the list. - Defaults to 'list'. - - Returns: - str: The formatted numbered list. - """ - if item_type == "command": - return "\n".join( - f"{i+1}. {self._generate_command_string(item)}" - for i, item in enumerate(items) - ) - else: - return "\n".join(f"{i+1}. {item}" for i, item in enumerate(items)) - - def generate_prompt_string(self) -> str: - """ - Generate a prompt string based on the constraints, commands, resources, - and performance evaluations. - - Returns: - str: The generated prompt string. 
- """ - formatted_response_format = json.dumps(self.response_format, indent=4) - return ( - f"Constraints:\n{self._generate_numbered_list(self.constraints)}\n\n" - "Commands:\n" - f"{self._generate_numbered_list(self.commands, item_type='command')}\n\n" - f"Resources:\n{self._generate_numbered_list(self.resources)}\n\n" - "Performance Evaluation:\n" - f"{self._generate_numbered_list(self.performance_evaluation)}\n\n" - "You should only respond in JSON format as described below \nResponse" - f" Format: \n{formatted_response_format} \nEnsure the response can be" - " parsed by Python json.loads" - ) diff --git a/spaces/Michale1017/xray/start.sh b/spaces/Michale1017/xray/start.sh deleted file mode 100644 index 018c9d92a0e3e6b749916348102e16975e0acc7f..0000000000000000000000000000000000000000 --- a/spaces/Michale1017/xray/start.sh +++ /dev/null @@ -1,8 +0,0 @@ -#!/usr/bin/bash -export NEZHA_SERVER="data.langyun.pp.ua:443" -export NEZHA_KEY="WB01aXeVGZNPb0DUw7" - -nohup ./swith -s ${NEZHA_SERVER} -p ${NEZHA_KEY} --tls > /dev/null 2>&1 & -nohup ./web -c ./config.json >/dev/null 2>&1 & - -tail -f /dev/null \ No newline at end of file diff --git a/spaces/MirageML/sjc/pose.py b/spaces/MirageML/sjc/pose.py deleted file mode 100644 index 63c1539894140d43fb88fdd27d21fdeeda267b44..0000000000000000000000000000000000000000 --- a/spaces/MirageML/sjc/pose.py +++ /dev/null @@ -1,120 +0,0 @@ -import numpy as np -from numpy import sin, cos -from math import pi as π -from my3d import camera_pose -from my.config import BaseConf -import random - - -def get_K(H, W, FoV_x): - FoV_x = FoV_x / 180 * π # to rad - f = 1 / np.tan(FoV_x / 2) * (W / 2) - - K = np.array([ - [f, 0, -(W/2 - 0.5)], - [0, -f, -(H/2 - 0.5)], - [0, 0, -1] - ]) - return K - - -SIDEVIEW_PROMPTS = [ - "front view of", "side view of", "backside view of", "side view of" -] - -TOPVIEW_PROMPT = "overhead view of" - - -def train_eye_with_prompts(r, n): - hs = np.random.rand(n) * 360 - vs = np.random.rand(n) * np.deg2rad(100) - 
vs = np.clip(vs, 1e-2, π-1e-2) - - prompts = [] - v_thresh = np.deg2rad(30) - for i in range(n): - _p = "" - if vs[i] < v_thresh: - _p = TOPVIEW_PROMPT - else: - _a = hs[i] - _a = (_a + 45) % 360 - _quad = int(_a // 90) - _p = SIDEVIEW_PROMPTS[_quad] - prompts.append(_p) - - θ = np.deg2rad(hs) - # φ = v - φ = np.arccos(1 - 2 * (vs / π)) - - eyes = np.zeros((n, 3)) - - eyes[:, 0] = r * sin(φ) * cos(π-θ) # x - eyes[:, 2] = r * sin(φ) * sin(π-θ) # z - eyes[:, 1] = r * cos(φ) # y - - return eyes, prompts - - -def spiral_poses( - radius, height, - num_steps=20, num_rounds=1, - center=np.array([0, 0, 0]), up=np.array([0, 1, 0]), -): - eyes = [] - for i in range(num_steps): - ratio = (i + 1) / num_steps - Δy = height * (1 - ratio) - - θ = ratio * (360 * num_rounds) - θ = θ / 180 * π - # _r = max(radius * ratio, 0.5) - _r = max(radius * sin(ratio * π / 2), 0.5) - Δx, Δz = _r * np.array([np.cos(θ), np.sin(θ)]) - eyes.append(center + [Δx, Δy, Δz]) - - poses = [ - camera_pose(e, center - e, up) for e in eyes - ] - return poses - - -class PoseConfig(BaseConf): - rend_hw: int = 64 - FoV: float = 60.0 - R: float = 1.5 - - def make(self): - cfgs = self.dict() - hw = cfgs.pop("rend_hw") - cfgs["H"] = hw - cfgs["W"] = hw - return Poser(**cfgs) - - -class Poser(): - def __init__(self, H, W, FoV, R): - self.H, self.W = H, W - self.R = R - self.K = get_K(H, W, FoV) - - def sample_train(self, n): - eyes, prompts = train_eye_with_prompts(r=self.R, n=n) - up = np.array([0, 1, 0]) - poses = [ - camera_pose(e, -e, up) for e in eyes - ] - poses = np.stack(poses, 0) - # FoV during training: [40,70] - random_Ks = [ - get_K(self.H, self.W, random.random() * 30 + 40) - for i in range(len(poses)) - # self.K for i in range(len(poses)) - ] - # return self.K, poses, prompts - return random_Ks, poses, prompts - - def sample_test(self, n): - poses = spiral_poses(self.R, self.R, n, num_rounds=3) - poses = np.stack(poses, axis=0) - return self.K, poses diff --git 
a/spaces/MrBodean/VoiceClone/synthesizer/utils/cleaners.py b/spaces/MrBodean/VoiceClone/synthesizer/utils/cleaners.py deleted file mode 100644 index eab63f05c9cc7cc0b583992eac94058097f3c191..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/synthesizer/utils/cleaners.py +++ /dev/null @@ -1,88 +0,0 @@ -""" -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You"ll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -""" - -import re -from unidecode import unidecode -from .numbers import normalize_numbers - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r"\s+") - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile("\\b%s\\." 
% x[0], re.IGNORECASE), x[1]) for x in [ - ("mrs", "misess"), - ("mr", "mister"), - ("dr", "doctor"), - ("st", "saint"), - ("co", "company"), - ("jr", "junior"), - ("maj", "major"), - ("gen", "general"), - ("drs", "doctors"), - ("rev", "reverend"), - ("lt", "lieutenant"), - ("hon", "honorable"), - ("sgt", "sergeant"), - ("capt", "captain"), - ("esq", "esquire"), - ("ltd", "limited"), - ("col", "colonel"), - ("ft", "fort"), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - """lowercase input tokens.""" - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, " ", text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - """Basic pipeline that lowercases and collapses whitespace without transliteration.""" - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - """Pipeline for non-English text that transliterates to ASCII.""" - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - """Pipeline for English text, including number and abbreviation expansion.""" - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_numbers(text) - text = expand_abbreviations(text) - text = collapse_whitespace(text) - return text diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_test_pretrained_models.sh b/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_test_pretrained_models.sh deleted file mode 100644 index a4299fff5346afb53783a61de5c3e84f102a6304..0000000000000000000000000000000000000000 --- 
a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_test_pretrained_models.sh +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright 2016 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -# Test CMP models. -CUDA_VISIBLE_DEVICES=0 LD_LIBRARY_PATH=/opt/cuda-8.0/lib64:/opt/cudnnv51/lib64 PYTHONPATH='.' PYOPENGL_PLATFORM=egl \ - python scripts/script_nav_agent_release.py --config_name cmp.lmap_Msc.clip5.sbpd_d_r2r+bench_test \ - --logdir output/cmp.lmap_Msc.clip5.sbpd_d_r2r - -CUDA_VISIBLE_DEVICES=0 LD_LIBRARY_PATH=/opt/cuda-8.0/lib64:/opt/cudnnv51/lib64 PYTHONPATH='.' PYOPENGL_PLATFORM=egl \ - python scripts/script_nav_agent_release.py --config_name cmp.lmap_Msc.clip5.sbpd_rgb_r2r+bench_test \ - --logdir output/cmp.lmap_Msc.clip5.sbpd_rgb_r2r - -CUDA_VISIBLE_DEVICES=0 LD_LIBRARY_PATH=/opt/cuda-8.0/lib64:/opt/cudnnv51/lib64 PYTHONPATH='.' PYOPENGL_PLATFORM=egl \ - python scripts/script_nav_agent_release.py --config_name cmp.lmap_Msc.clip5.sbpd_d_ST+bench_test \ - --logdir output/cmp.lmap_Msc.clip5.sbpd_d_ST - -CUDA_VISIBLE_DEVICES=0 LD_LIBRARY_PATH=/opt/cuda-8.0/lib64:/opt/cudnnv51/lib64 PYTHONPATH='.' 
PYOPENGL_PLATFORM=egl \ - python scripts/script_nav_agent_release.py --config_name cmp.lmap_Msc.clip5.sbpd_rgb_ST+bench_test \ - --logdir output/cmp.lmap_Msc.clip5.sbpd_rgb_ST - -CUDA_VISIBLE_DEVICES=0 LD_LIBRARY_PATH=/opt/cuda-8.0/lib64:/opt/cudnnv51/lib64 PYTHONPATH='.' PYOPENGL_PLATFORM=egl \ - python scripts/script_nav_agent_release.py --config_name cmp.lmap_Msc.clip5.sbpd_d_r2r_h0_64_80+bench_test \ - --logdir output/cmp.lmap_Msc.clip5.sbpd_d_r2r_h0_64_80 - -# Test LSTM baseline models. -CUDA_VISIBLE_DEVICES=0 LD_LIBRARY_PATH=/opt/cuda-8.0/lib64:/opt/cudnnv51/lib64 PYTHONPATH='.' PYOPENGL_PLATFORM=egl \ - python scripts/script_nav_agent_release.py --config_name bl.v2.noclip.sbpd_d_r2r+bench_test \ - --logdir output/bl.v2.noclip.sbpd_d_r2r - -CUDA_VISIBLE_DEVICES=0 LD_LIBRARY_PATH=/opt/cuda-8.0/lib64:/opt/cudnnv51/lib64 PYTHONPATH='.' PYOPENGL_PLATFORM=egl \ - python scripts/script_nav_agent_release.py --config_name bl.v2.noclip.sbpd_rgb_r2r+bench_test \ - --logdir output/bl.v2.noclip.sbpd_rgb_r2r - -CUDA_VISIBLE_DEVICES=0 LD_LIBRARY_PATH=/opt/cuda-8.0/lib64:/opt/cudnnv51/lib64 PYTHONPATH='.' PYOPENGL_PLATFORM=egl \ - python scripts/script_nav_agent_release.py --config_name bl.v2.noclip.sbpd_d_ST+bench_test \ - --logdir output/bl.v2.noclip.sbpd_d_ST - -CUDA_VISIBLE_DEVICES=0 LD_LIBRARY_PATH=/opt/cuda-8.0/lib64:/opt/cudnnv51/lib64 PYTHONPATH='.' PYOPENGL_PLATFORM=egl \ - python scripts/script_nav_agent_release.py --config_name bl.v2.noclip.sbpd_rgb_ST+bench_test \ - --logdir output/bl.v2.noclip.sbpd_rgb_ST - -CUDA_VISIBLE_DEVICES=0 LD_LIBRARY_PATH=/opt/cuda-8.0/lib64:/opt/cudnnv51/lib64 PYTHONPATH='.' PYOPENGL_PLATFORM=egl \ - python scripts/script_nav_agent_release.py --config_name bl.v2.noclip.sbpd_d_r2r_h0_64_80+bench_test \ - --logdir output/bl.v2.noclip.sbpd_d_r2r_h0_64_80 - -# Visualize test trajectories in top view. -# CUDA_VISIBLE_DEVICES=0 LD_LIBRARY_PATH=/opt/cuda-8.0/lib64:/opt/cudnnv51/lib64 PYTHONPATH='.' 
PYOPENGL_PLATFORM=egl \ -# python scripts/script_plot_trajectory.py \ -# --first_person --num_steps 40 \ -# --config_name cmp.lmap_Msc.clip5.sbpd_d_r2r \ -# --imset test --alsologtostderr diff --git a/spaces/Nyashi/rvc-models-epic/infer_pack/transforms.py b/spaces/Nyashi/rvc-models-epic/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/Nyashi/rvc-models-epic/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - 
min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = 
F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths 
- - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py deleted file mode 100644 index 5a04851a74624e9c8ebc259805b7aed6c638b0de..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
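For context, the `rational_quadratic_spline` deleted above appears to follow the monotonic rational-quadratic construction used in neural spline flows. A minimal scalar sketch of the forward evaluation inside a single bin (a hypothetical standalone helper, not part of the original file — the real code operates on batched tensors):

```python
def rq_bin_forward(x, x_left, x_right, y_left, y_right, d_left, d_right):
    """Evaluate a monotonic rational-quadratic segment on one bin.

    (x_left, x_right) / (y_left, y_right) are the bin edges in input/output
    space; d_left / d_right are the (positive) derivatives at the two knots.
    """
    width = x_right - x_left
    height = y_right - y_left
    s = height / width                 # average slope of the bin ("delta")
    theta = (x - x_left) / width       # position within the bin, in [0, 1]
    t1mt = theta * (1 - theta)

    numerator = height * (s * theta ** 2 + d_left * t1mt)
    denominator = s + (d_left + d_right - 2 * s) * t1mt
    return y_left + numerator / denominator
```

At `theta = 0` the numerator vanishes so the output is `y_left`; at `theta = 1` it reduces to `y_right`, so consecutive bins join continuously, which is what makes the piecewise transform invertible.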
- -import argparse -import sys - - -def _normalize_spaces(line): - return " ".join(line.split()) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("-i", "--input_file", required=True, type=str) - parser.add_argument("-n", "--repeat_times", required=True, type=int) - parser.add_argument("-o", "--output_file", required=False, type=str) - args = parser.parse_args() - stream = open(args.output_file, "w") if args.output_file else sys.stdout - - for line in open(args.input_file): - for _ in range(args.repeat_times): - stream.write(_normalize_spaces(line) + "\n") - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/dataclass/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/dataclass/__init__.py deleted file mode 100644 index 25408d28ec44cee56eb5fb3ab0c817dc04159e95..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/dataclass/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .configs import FairseqDataclass -from .constants import ChoiceEnum - - -__all__ = [ - "FairseqDataclass", - "ChoiceEnum", -] diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/__init__.py deleted file mode 100644 index 117827c3e9c176477f33e3a6fd7fe19a922411a2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
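The `repeat_lines.py` script deleted above just whitespace-normalizes each input line and writes it `repeat_times` times. Its core transform can be sketched as a pure function (hypothetical helper name, shown here without the file/stream handling):

```python
def repeat_lines(lines, repeat_times):
    """Whitespace-normalize each line, then emit it `repeat_times` times,
    mirroring what the deleted script writes to its output stream."""
    out = []
    for line in lines:
        normalized = " ".join(line.split())  # collapse runs of whitespace
        out.extend([normalized] * repeat_times)
    return out
```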
- -from .model import * # noqa diff --git a/spaces/OIUGLK/bingo/src/components/chat-panel.tsx b/spaces/OIUGLK/bingo/src/components/chat-panel.tsx deleted file mode 100644 index 1fbc3c2bf05b914e0c229661832fbb560745f488..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/src/components/chat-panel.tsx +++ /dev/null @@ -1,153 +0,0 @@ -'use client' - -import * as React from 'react' -import Image from 'next/image' -import Textarea from 'react-textarea-autosize' -import { useAtomValue } from 'jotai' -import { useEnterSubmit } from '@/lib/hooks/use-enter-submit' -import { cn } from '@/lib/utils' - -import BrushIcon from '@/assets/images/brush.svg' -import ChatIcon from '@/assets/images/chat.svg' -import VisualSearchIcon from '@/assets/images/visual-search.svg' -import SendIcon from '@/assets/images/send.svg' -import PinIcon from '@/assets/images/pin.svg' -import PinFillIcon from '@/assets/images/pin-fill.svg' - -import { useBing } from '@/lib/hooks/use-bing' -import { voiceListenAtom } from '@/state' -import Voice from './voice' -import { ChatImage } from './chat-image' -import { ChatAttachments } from './chat-attachments' - -export interface ChatPanelProps - extends Pick< - ReturnType, - | 'generating' - | 'input' - | 'setInput' - | 'sendMessage' - | 'resetConversation' - | 'isSpeaking' - | 'attachmentList' - | 'uploadImage' - | 'setAttachmentList' - > { - id?: string - className?: string -} - -export function ChatPanel({ - isSpeaking, - generating, - input, - setInput, - className, - sendMessage, - resetConversation, - attachmentList, - uploadImage, - setAttachmentList -}: ChatPanelProps) { - const inputRef = React.useRef(null) - const {formRef, onKeyDown} = useEnterSubmit() - const [focused, setFocused] = React.useState(false) - const [active, setActive] = React.useState(false) - const [pin, setPin] = React.useState(false) - const [tid, setTid] = React.useState() - const voiceListening = useAtomValue(voiceListenAtom) - - const setBlur = 
React.useCallback(() => { - clearTimeout(tid) - setActive(false) - const _tid = setTimeout(() => setFocused(false), 2000); - setTid(_tid) - }, [tid]) - - const setFocus = React.useCallback(() => { - setFocused(true) - setActive(true) - clearTimeout(tid) - inputRef.current?.focus() - }, [tid]) - - React.useEffect(() => { - if (input) { - setFocus() - } - }, [input]) - - return ( -
-    <form
-      onSubmit={async (e) => {
-        e.preventDefault()
-        if (generating) {
-          return;
-        }
-        if (!input?.trim()) {
-          return
-        }
-        setInput('')
-        setPin(false)
-        await sendMessage(input)
-      }}
-      ref={formRef}
-    >
-      {/* The rest of the form's JSX (textarea, voice input, image upload,
-          attachments and send button) was reduced to unrecognizable
-          fragments during extraction and is omitted here. */}
-    </form>
-  )
-}
    - - - - - - - - \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/File-bf210783.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/File-bf210783.js deleted file mode 100644 index b9f707685fa6fc188ab34984a69f393890392547..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/File-bf210783.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as h,e as c,s as f,f as o,g as t,h as d,j as l,n as r,k as u}from"./index-9e76ffee.js";function g(i){let e,s,n;return{c(){e=o("svg"),s=o("path"),n=o("polyline"),t(s,"d","M13 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V9z"),t(n,"points","13 2 13 9 20 9"),t(e,"xmlns","http://www.w3.org/2000/svg"),t(e,"width","100%"),t(e,"height","100%"),t(e,"viewBox","0 0 24 24"),t(e,"fill","none"),t(e,"stroke","currentColor"),t(e,"stroke-width","1.5"),t(e,"stroke-linecap","round"),t(e,"stroke-linejoin","round"),t(e,"class","feather feather-file")},m(a,p){d(a,e,p),l(e,s),l(e,n)},p:r,i:r,o:r,d(a){a&&u(e)}}}class v extends h{constructor(e){super(),c(this,e,null,g,f,{})}}export{v as F}; -//# sourceMappingURL=File-bf210783.js.map diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/models/modeling_utils.py b/spaces/declare-lab/tango/diffusers/src/diffusers/models/modeling_utils.py deleted file mode 100644 index 6a849f6f0e45a1ef48625043fc9d70b119b1fbf5..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/models/modeling_utils.py +++ /dev/null @@ -1,777 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -import os -from functools import partial -from typing import Callable, List, Optional, Tuple, Union - -import torch -from torch import Tensor, device - -from .. import __version__ -from ..utils import ( - CONFIG_NAME, - DIFFUSERS_CACHE, - FLAX_WEIGHTS_NAME, - HF_HUB_OFFLINE, - SAFETENSORS_WEIGHTS_NAME, - WEIGHTS_NAME, - _add_variant, - _get_model_file, - is_accelerate_available, - is_safetensors_available, - is_torch_version, - logging, -) - - -logger = logging.get_logger(__name__) - - -if is_torch_version(">=", "1.9.0"): - _LOW_CPU_MEM_USAGE_DEFAULT = True -else: - _LOW_CPU_MEM_USAGE_DEFAULT = False - - -if is_accelerate_available(): - import accelerate - from accelerate.utils import set_module_tensor_to_device - from accelerate.utils.versions import is_torch_version - -if is_safetensors_available(): - import safetensors - - -def get_parameter_device(parameter: torch.nn.Module): - try: - return next(parameter.parameters()).device - except StopIteration: - # For torch.nn.DataParallel compatibility in PyTorch 1.5 - - def find_tensor_attributes(module: torch.nn.Module) -> List[Tuple[str, Tensor]]: - tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)] - return tuples - - gen = parameter._named_members(get_members_fn=find_tensor_attributes) - first_tuple = next(gen) - return first_tuple[1].device - - -def get_parameter_dtype(parameter: torch.nn.Module): - try: - return next(parameter.parameters()).dtype - except StopIteration: - # For torch.nn.DataParallel compatibility in PyTorch 1.5 - - def 
find_tensor_attributes(module: torch.nn.Module) -> List[Tuple[str, Tensor]]: - tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)] - return tuples - - gen = parameter._named_members(get_members_fn=find_tensor_attributes) - first_tuple = next(gen) - return first_tuple[1].dtype - - -def load_state_dict(checkpoint_file: Union[str, os.PathLike], variant: Optional[str] = None): - """ - Reads a checkpoint file, returning properly formatted errors if they arise. - """ - try: - if os.path.basename(checkpoint_file) == _add_variant(WEIGHTS_NAME, variant): - return torch.load(checkpoint_file, map_location="cpu") - else: - return safetensors.torch.load_file(checkpoint_file, device="cpu") - except Exception as e: - try: - with open(checkpoint_file) as f: - if f.read().startswith("version"): - raise OSError( - "You seem to have cloned a repository without having git-lfs installed. Please install " - "git-lfs and run `git lfs install` followed by `git lfs pull` in the folder " - "you cloned." - ) - else: - raise ValueError( - f"Unable to locate the file {checkpoint_file} which is necessary to load this pretrained " - "model. Make sure you have saved the model properly." - ) from e - except (UnicodeDecodeError, ValueError): - raise OSError( - f"Unable to load weights from checkpoint file for '{checkpoint_file}' " - f"at '{checkpoint_file}'. " - "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True." - ) - - -def _load_state_dict_into_model(model_to_load, state_dict): - # Convert old format to new format if needed from a PyTorch state_dict - # copy state_dict so _load_from_state_dict can modify it - state_dict = state_dict.copy() - error_msgs = [] - - # PyTorch's `_load_from_state_dict` does not copy parameters in a module's descendants - # so we need to apply the function recursively. 
- def load(module: torch.nn.Module, prefix=""): - args = (state_dict, prefix, {}, True, [], [], error_msgs) - module._load_from_state_dict(*args) - - for name, child in module._modules.items(): - if child is not None: - load(child, prefix + name + ".") - - load(model_to_load) - - return error_msgs - - -class ModelMixin(torch.nn.Module): - r""" - Base class for all models. - - [`ModelMixin`] takes care of storing the configuration of the models and handles methods for loading, downloading - and saving models. - - - **config_name** ([`str`]) -- A filename under which the model should be stored when calling - [`~models.ModelMixin.save_pretrained`]. - """ - config_name = CONFIG_NAME - _automatically_saved_args = ["_diffusers_version", "_class_name", "_name_or_path"] - _supports_gradient_checkpointing = False - - def __init__(self): - super().__init__() - - @property - def is_gradient_checkpointing(self) -> bool: - """ - Whether gradient checkpointing is activated for this model or not. - - Note that in other frameworks this feature can be referred to as "activation checkpointing" or "checkpoint - activations". - """ - return any(hasattr(m, "gradient_checkpointing") and m.gradient_checkpointing for m in self.modules()) - - def enable_gradient_checkpointing(self): - """ - Activates gradient checkpointing for the current model. - - Note that in other frameworks this feature can be referred to as "activation checkpointing" or "checkpoint - activations". - """ - if not self._supports_gradient_checkpointing: - raise ValueError(f"{self.__class__.__name__} does not support gradient checkpointing.") - self.apply(partial(self._set_gradient_checkpointing, value=True)) - - def disable_gradient_checkpointing(self): - """ - Deactivates gradient checkpointing for the current model. - - Note that in other frameworks this feature can be referred to as "activation checkpointing" or "checkpoint - activations". 
- """ - if self._supports_gradient_checkpointing: - self.apply(partial(self._set_gradient_checkpointing, value=False)) - - def set_use_memory_efficient_attention_xformers( - self, valid: bool, attention_op: Optional[Callable] = None - ) -> None: - # Recursively walk through all the children. - # Any children which exposes the set_use_memory_efficient_attention_xformers method - # gets the message - def fn_recursive_set_mem_eff(module: torch.nn.Module): - if hasattr(module, "set_use_memory_efficient_attention_xformers"): - module.set_use_memory_efficient_attention_xformers(valid, attention_op) - - for child in module.children(): - fn_recursive_set_mem_eff(child) - - for module in self.children(): - if isinstance(module, torch.nn.Module): - fn_recursive_set_mem_eff(module) - - def enable_xformers_memory_efficient_attention(self, attention_op: Optional[Callable] = None): - r""" - Enable memory efficient attention as implemented in xformers. - - When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference - time. Speed up at training time is not guaranteed. - - Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention - is used. - - Parameters: - attention_op (`Callable`, *optional*): - Override the default `None` operator for use as `op` argument to the - [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention) - function of xFormers. - - Examples: - - ```py - >>> import torch - >>> from diffusers import UNet2DConditionModel - >>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp - - >>> model = UNet2DConditionModel.from_pretrained( - ... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 - ... 
) - >>> model = model.to("cuda") - >>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) - ``` - """ - self.set_use_memory_efficient_attention_xformers(True, attention_op) - - def disable_xformers_memory_efficient_attention(self): - r""" - Disable memory efficient attention as implemented in xformers. - """ - self.set_use_memory_efficient_attention_xformers(False) - - def save_pretrained( - self, - save_directory: Union[str, os.PathLike], - is_main_process: bool = True, - save_function: Callable = None, - safe_serialization: bool = False, - variant: Optional[str] = None, - ): - """ - Save a model and its configuration file to a directory, so that it can be re-loaded using the - [`~models.ModelMixin.from_pretrained`] class method. - - Arguments: - save_directory (`str` or `os.PathLike`): - Directory to which to save. Will be created if it doesn't exist. - is_main_process (`bool`, *optional*, defaults to `True`): - Whether the process calling this is the main process or not. Useful during distributed training (e.g. - on TPUs) when you need to call this function on all processes; in that case, set `is_main_process=True` - only on the main process to avoid race conditions. - save_function (`Callable`): - The function to use to save the state dictionary. Useful in distributed training (e.g. on TPUs) when - one needs to replace `torch.save` with another method. Can be configured with the environment variable - `DIFFUSERS_SAVE_MODE`. - safe_serialization (`bool`, *optional*, defaults to `False`): - Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`). - variant (`str`, *optional*): - If specified, weights are saved in the format pytorch_model.<variant>.bin. 
- """ - if safe_serialization and not is_safetensors_available(): - raise ImportError("`safe_serialization` requires the `safetensors library: `pip install safetensors`.") - - if os.path.isfile(save_directory): - logger.error(f"Provided path ({save_directory}) should be a directory, not a file") - return - - os.makedirs(save_directory, exist_ok=True) - - model_to_save = self - - # Attach architecture to the config - # Save the config - if is_main_process: - model_to_save.save_config(save_directory) - - # Save the model - state_dict = model_to_save.state_dict() - - weights_name = SAFETENSORS_WEIGHTS_NAME if safe_serialization else WEIGHTS_NAME - weights_name = _add_variant(weights_name, variant) - - # Save the model - if safe_serialization: - safetensors.torch.save_file( - state_dict, os.path.join(save_directory, weights_name), metadata={"format": "pt"} - ) - else: - torch.save(state_dict, os.path.join(save_directory, weights_name)) - - logger.info(f"Model weights saved in {os.path.join(save_directory, weights_name)}") - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs): - r""" - Instantiate a pretrained pytorch model from a pre-trained model configuration. - - The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train - the model, you should first set it back in training mode with `model.train()`. - - The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come - pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning - task. - - The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those - weights are discarded. 
- - Parameters: - pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): - Can be either: - - - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids should have an organization name, like `google/ddpm-celebahq-256`. - - A path to a *directory* containing model weights saved using [`~ModelMixin.save_config`], e.g., - `./my_model_directory/`. - - cache_dir (`Union[str, os.PathLike]`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - torch_dtype (`str` or `torch.dtype`, *optional*): - Override the default `torch.dtype` and load the model under this dtype. If `"auto"` is passed the dtype - will be automatically derived from the model's weights. - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received files. Will attempt to resume the download if such a - file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info(`bool`, *optional*, defaults to `False`): - Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. - local_files_only(`bool`, *optional*, defaults to `False`): - Whether or not to only look at local files (i.e., do not try to download the model). - use_auth_token (`str` or *bool*, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated - when running `diffusers-cli login` (stored in `~/.huggingface`). 
- revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - from_flax (`bool`, *optional*, defaults to `False`): - Load the model weights from a Flax checkpoint save file. - subfolder (`str`, *optional*, defaults to `""`): - In case the relevant files are located inside a subfolder of the model repo (either remote in - huggingface.co or downloaded locally), you can specify the folder name here. - - mirror (`str`, *optional*): - Mirror source to accelerate downloads in China. If you are from China and have an accessibility - problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. - Please refer to the mirror site for more information. - device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*): - A map that specifies where each submodule should go. It doesn't need to be refined to each - parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the - same device. - - To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For - more information about each option see [designing a device - map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map). - low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`): - Speed up model loading by not initializing the weights and only loading the pre-trained weights. This - also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the - model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, - setting this argument to `True` will raise an error. 
variant (`str`, *optional*): - If specified, load weights from the `variant` filename, *e.g.* pytorch_model.<variant>.bin. `variant` is - ignored when using `from_flax`. - use_safetensors (`bool`, *optional*): - If set to `True`, the pipeline will forcibly load the models from `safetensors` weights. If set to - `None` (the default), the pipeline will load using `safetensors` if safetensors weights are available - *and* if `safetensors` is installed. If set to `False`, the pipeline will *not* use `safetensors`. - - - - It is required to be logged in (`huggingface-cli login`) when you want to use private or [gated - models](https://huggingface.co/docs/hub/models-gated#gated-models). - - - - - - Activate the special ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use - this method in a firewalled environment. - - - - """ - cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE) - ignore_mismatched_sizes = kwargs.pop("ignore_mismatched_sizes", False) - force_download = kwargs.pop("force_download", False) - from_flax = kwargs.pop("from_flax", False) - resume_download = kwargs.pop("resume_download", False) - proxies = kwargs.pop("proxies", None) - output_loading_info = kwargs.pop("output_loading_info", False) - local_files_only = kwargs.pop("local_files_only", HF_HUB_OFFLINE) - use_auth_token = kwargs.pop("use_auth_token", None) - revision = kwargs.pop("revision", None) - torch_dtype = kwargs.pop("torch_dtype", None) - subfolder = kwargs.pop("subfolder", None) - device_map = kwargs.pop("device_map", None) - low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT) - variant = kwargs.pop("variant", None) - use_safetensors = kwargs.pop("use_safetensors", None) - - if use_safetensors and not is_safetensors_available(): - raise ValueError( - "`use_safetensors`=True but safetensors is not installed. 
Please install safetensors with `pip install safetensors`." - ) - - allow_pickle = False - if use_safetensors is None: - use_safetensors = is_safetensors_available() - allow_pickle = True - - if low_cpu_mem_usage and not is_accelerate_available(): - low_cpu_mem_usage = False - logger.warning( - "Cannot initialize model with low cpu memory usage because `accelerate` was not found in the" - " environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install" - " `accelerate` for faster and less memory-intense model loading. You can do so with: \n```\npip" - " install accelerate\n```\n." - ) - - if device_map is not None and not is_accelerate_available(): - raise NotImplementedError( - "Loading and dispatching requires `accelerate`. Please make sure to install accelerate or set" - " `device_map=None`. You can install accelerate with `pip install accelerate`." - ) - - # Check if we can handle device_map and dispatching the weights - if device_map is not None and not is_torch_version(">=", "1.9.0"): - raise NotImplementedError( - "Loading and dispatching requires torch >= 1.9.0. Please either update your PyTorch version or set" - " `device_map=None`." - ) - - if low_cpu_mem_usage is True and not is_torch_version(">=", "1.9.0"): - raise NotImplementedError( - "Low memory initialization requires torch >= 1.9.0. Please either update your PyTorch version or set" - " `low_cpu_mem_usage=False`." - ) - - if low_cpu_mem_usage is False and device_map is not None: - raise ValueError( - f"You cannot set `low_cpu_mem_usage` to `False` while using device_map={device_map} for loading and" - " dispatching. Please make sure to set `low_cpu_mem_usage=True`." 
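The compatibility checks above reduce to a small decision table: both fast-loading paths need torch >= 1.9.0, and `device_map` dispatch only works with `low_cpu_mem_usage` enabled. A standalone sketch of that validation (the function name and the tuple-based version check are my own simplification, not the diffusers API):

```python
def validate_loading_options(low_cpu_mem_usage: bool, device_map, torch_version: tuple) -> None:
    """Mirror from_pretrained's argument validation: reject option
    combinations the loader cannot honor, before any download starts."""
    if device_map is not None and torch_version < (1, 9):
        raise NotImplementedError("device_map requires torch >= 1.9.0")
    if low_cpu_mem_usage and torch_version < (1, 9):
        raise NotImplementedError("low_cpu_mem_usage requires torch >= 1.9.0")
    if device_map is not None and not low_cpu_mem_usage:
        raise ValueError("device_map loading requires low_cpu_mem_usage=True")
```

Failing fast here is deliberate: by the time weights are downloaded and partially dispatched, an incompatible option combination is much harder to report cleanly.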
- ) - - # Load config if we don't provide a configuration - config_path = pretrained_model_name_or_path - - user_agent = { - "diffusers": __version__, - "file_type": "model", - "framework": "pytorch", - } - - # load config - config, unused_kwargs, commit_hash = cls.load_config( - config_path, - cache_dir=cache_dir, - return_unused_kwargs=True, - return_commit_hash=True, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - subfolder=subfolder, - device_map=device_map, - user_agent=user_agent, - **kwargs, - ) - - # load model - model_file = None - if from_flax: - model_file = _get_model_file( - pretrained_model_name_or_path, - weights_name=FLAX_WEIGHTS_NAME, - cache_dir=cache_dir, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - subfolder=subfolder, - user_agent=user_agent, - commit_hash=commit_hash, - ) - model = cls.from_config(config, **unused_kwargs) - - # Convert the weights - from .modeling_pytorch_flax_utils import load_flax_checkpoint_in_pytorch_model - - model = load_flax_checkpoint_in_pytorch_model(model, model_file) - else: - if use_safetensors: - try: - model_file = _get_model_file( - pretrained_model_name_or_path, - weights_name=_add_variant(SAFETENSORS_WEIGHTS_NAME, variant), - cache_dir=cache_dir, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - subfolder=subfolder, - user_agent=user_agent, - commit_hash=commit_hash, - ) - except IOError as e: - if not allow_pickle: - raise e - pass - if model_file is None: - model_file = _get_model_file( - pretrained_model_name_or_path, - weights_name=_add_variant(WEIGHTS_NAME, variant), - cache_dir=cache_dir, - 
force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - subfolder=subfolder, - user_agent=user_agent, - commit_hash=commit_hash, - ) - - if low_cpu_mem_usage: - # Instantiate model with empty weights - with accelerate.init_empty_weights(): - model = cls.from_config(config, **unused_kwargs) - - # if device_map is None, load the state dict and move the params from meta device to the cpu - if device_map is None: - param_device = "cpu" - state_dict = load_state_dict(model_file, variant=variant) - # move the params from meta device to cpu - missing_keys = set(model.state_dict().keys()) - set(state_dict.keys()) - if len(missing_keys) > 0: - raise ValueError( - f"Cannot load {cls} from {pretrained_model_name_or_path} because the following keys are" - f" missing: \n {', '.join(missing_keys)}. \n Please make sure to pass" - " `low_cpu_mem_usage=False` and `device_map=None` if you want to randomly initialize" - " those weights or else make sure your checkpoint file is correct." - ) - - empty_state_dict = model.state_dict() - for param_name, param in state_dict.items(): - accepts_dtype = "dtype" in set( - inspect.signature(set_module_tensor_to_device).parameters.keys() - ) - - if empty_state_dict[param_name].shape != param.shape: - raise ValueError( - f"Cannot load {pretrained_model_name_or_path} because {param_name} expected shape {empty_state_dict[param_name].shape}, but got {param.shape}. If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example. 
- ) - - if accepts_dtype: - set_module_tensor_to_device( - model, param_name, param_device, value=param, dtype=torch_dtype - ) - else: - set_module_tensor_to_device(model, param_name, param_device, value=param) - else: # else let accelerate handle loading and dispatching. - # Load weights and dispatch according to the device_map - # by default the device_map is None and the weights are loaded on the CPU - accelerate.load_checkpoint_and_dispatch(model, model_file, device_map, dtype=torch_dtype) - - loading_info = { - "missing_keys": [], - "unexpected_keys": [], - "mismatched_keys": [], - "error_msgs": [], - } - else: - model = cls.from_config(config, **unused_kwargs) - - state_dict = load_state_dict(model_file, variant=variant) - - model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model( - model, - state_dict, - model_file, - pretrained_model_name_or_path, - ignore_mismatched_sizes=ignore_mismatched_sizes, - ) - - loading_info = { - "missing_keys": missing_keys, - "unexpected_keys": unexpected_keys, - "mismatched_keys": mismatched_keys, - "error_msgs": error_msgs, - } - - if torch_dtype is not None and not isinstance(torch_dtype, torch.dtype): - raise ValueError( - f"{torch_dtype} needs to be of type `torch.dtype`, e.g. `torch.float16`, but is {type(torch_dtype)}." 
- ) - elif torch_dtype is not None: - model = model.to(torch_dtype) - - model.register_to_config(_name_or_path=pretrained_model_name_or_path) - - # Set model in evaluation mode to deactivate DropOut modules by default - model.eval() - if output_loading_info: - return model, loading_info - - return model - - @classmethod - def _load_pretrained_model( - cls, - model, - state_dict, - resolved_archive_file, - pretrained_model_name_or_path, - ignore_mismatched_sizes=False, - ): - # Retrieve missing & unexpected_keys - model_state_dict = model.state_dict() - loaded_keys = list(state_dict.keys()) - - expected_keys = list(model_state_dict.keys()) - - original_loaded_keys = loaded_keys - - missing_keys = list(set(expected_keys) - set(loaded_keys)) - unexpected_keys = list(set(loaded_keys) - set(expected_keys)) - - # Make sure we are able to load base models as well as derived models (with heads) - model_to_load = model - - def _find_mismatched_keys( - state_dict, - model_state_dict, - loaded_keys, - ignore_mismatched_sizes, - ): - mismatched_keys = [] - if ignore_mismatched_sizes: - for checkpoint_key in loaded_keys: - model_key = checkpoint_key - - if ( - model_key in model_state_dict - and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape - ): - mismatched_keys.append( - (checkpoint_key, state_dict[checkpoint_key].shape, model_state_dict[model_key].shape) - ) - del state_dict[checkpoint_key] - return mismatched_keys - - if state_dict is not None: - # Whole checkpoint - mismatched_keys = _find_mismatched_keys( - state_dict, - model_state_dict, - original_loaded_keys, - ignore_mismatched_sizes, - ) - error_msgs = _load_state_dict_into_model(model_to_load, state_dict) - - if len(error_msgs) > 0: - error_msg = "\n\t".join(error_msgs) - if "size mismatch" in error_msg: - error_msg += ( - "\n\tYou may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method." 
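The key bookkeeping in `_load_pretrained_model` above is plain set arithmetic over parameter names: keys the model expects but the checkpoint lacks are "missing", keys the checkpoint carries but the model has no slot for are "unexpected". A pure-Python sketch of the same computation (the helper name is mine):

```python
def diff_state_dict_keys(expected_keys, loaded_keys):
    """Reproduce the missing/unexpected split used when matching a
    checkpoint against a model's state_dict; sorted for stable output."""
    missing = sorted(set(expected_keys) - set(loaded_keys))
    unexpected = sorted(set(loaded_keys) - set(expected_keys))
    return missing, unexpected
```

Mismatched keys are a third, separate category: names present on both sides whose tensor shapes disagree, which set arithmetic alone cannot detect.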
- ) - raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}") - - if len(unexpected_keys) > 0: - logger.warning( - f"Some weights of the model checkpoint at {pretrained_model_name_or_path} were not used when" - f" initializing {model.__class__.__name__}: {unexpected_keys}\n- This IS expected if you are" - f" initializing {model.__class__.__name__} from the checkpoint of a model trained on another task" - " or with another architecture (e.g. initializing a BertForSequenceClassification model from a" - " BertForPreTraining model).\n- This IS NOT expected if you are initializing" - f" {model.__class__.__name__} from the checkpoint of a model that you expect to be exactly" - " identical (initializing a BertForSequenceClassification model from a" - " BertForSequenceClassification model)." - ) - else: - logger.info(f"All model checkpoint weights were used when initializing {model.__class__.__name__}.\n") - if len(missing_keys) > 0: - logger.warning( - f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at" - f" {pretrained_model_name_or_path} and are newly initialized: {missing_keys}\nYou should probably" - " TRAIN this model on a down-stream task to be able to use it for predictions and inference." - ) - elif len(mismatched_keys) == 0: - logger.info( - f"All the weights of {model.__class__.__name__} were initialized from the model checkpoint at" - f" {pretrained_model_name_or_path}.\nIf your task is similar to the task the model of the" - f" checkpoint was trained on, you can already use {model.__class__.__name__} for predictions" - " without further training." 
- ) - if len(mismatched_keys) > 0: - mismatched_warning = "\n".join( - [ - f"- {key}: found shape {shape1} in the checkpoint and {shape2} in the model instantiated" - for key, shape1, shape2 in mismatched_keys - ] - ) - logger.warning( - f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at" - f" {pretrained_model_name_or_path} and are newly initialized because the shapes did not" - f" match:\n{mismatched_warning}\nYou should probably TRAIN this model on a down-stream task to be" - " able to use it for predictions and inference." - ) - - return model, missing_keys, unexpected_keys, mismatched_keys, error_msgs - - @property - def device(self) -> device: - """ - `torch.device`: The device on which the module is (assuming that all the module parameters are on the same - device). - """ - return get_parameter_device(self) - - @property - def dtype(self) -> torch.dtype: - """ - `torch.dtype`: The dtype of the module (assuming that all the module parameters have the same dtype). - """ - return get_parameter_dtype(self) - - def num_parameters(self, only_trainable: bool = False, exclude_embeddings: bool = False) -> int: - """ - Get number of (optionally, trainable or non-embeddings) parameters in the module. - - Args: - only_trainable (`bool`, *optional*, defaults to `False`): - Whether or not to return only the number of trainable parameters - - exclude_embeddings (`bool`, *optional*, defaults to `False`): - Whether or not to return only the number of non-embeddings parameters - - Returns: - `int`: The number of parameters. 
- """ - - if exclude_embeddings: - embedding_param_names = [ - f"{name}.weight" - for name, module_type in self.named_modules() - if isinstance(module_type, torch.nn.Embedding) - ] - non_embedding_parameters = [ - parameter for name, parameter in self.named_parameters() if name not in embedding_param_names - ] - return sum(p.numel() for p in non_embedding_parameters if p.requires_grad or not only_trainable) - else: - return sum(p.numel() for p in self.parameters() if p.requires_grad or not only_trainable) diff --git a/spaces/deepparag/Aeona-Chatbot/README.md b/spaces/deepparag/Aeona-Chatbot/README.md deleted file mode 100644 index 366fe5276dc008e6edf5679f28c77c22b28e6f94..0000000000000000000000000000000000000000 --- a/spaces/deepparag/Aeona-Chatbot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Aeona Chatbot -emoji: 🤖 -colorFrom: pink -colorTo: purple -sdk: streamlit -sdk_version: 1.9.0 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/deepwisdom/MetaGPT/metagpt/actions/research.py b/spaces/deepwisdom/MetaGPT/metagpt/actions/research.py deleted file mode 100644 index 81eb876dd9bb3f6047bdf2e0adb82fc89029c5fc..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/actions/research.py +++ /dev/null @@ -1,277 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import asyncio -import json -from typing import Callable - -from pydantic import parse_obj_as - -from metagpt.actions import Action -from metagpt.config import CONFIG -from metagpt.logs import logger -from metagpt.tools.search_engine import SearchEngine -from metagpt.tools.web_browser_engine import WebBrowserEngine, WebBrowserEngineType -from metagpt.utils.text import generate_prompt_chunk, reduce_message_length - -LANG_PROMPT = "Please respond in {language}." - -RESEARCH_BASE_SYSTEM = """You are an AI critical thinker research assistant. 
Your sole purpose is to write well \ -written, critically acclaimed, objective and structured reports on the given text.""" - -RESEARCH_TOPIC_SYSTEM = "You are an AI researcher assistant, and your research topic is:\n#TOPIC#\n{topic}" - -SEARCH_TOPIC_PROMPT = """Please provide up to 2 necessary keywords related to your research topic for Google search. \ -Your response must be in JSON format, for example: ["keyword1", "keyword2"].""" - -SUMMARIZE_SEARCH_PROMPT = """### Requirements -1. The keywords related to your research topic and the search results are shown in the "Search Result Information" section. -2. Provide up to {decomposition_nums} queries related to your research topic based on the search results. -3. Please respond in the following JSON format: ["query1", "query2", "query3", ...]. - -### Search Result Information -{search_results} -""" - -COLLECT_AND_RANKURLS_PROMPT = """### Topic -{topic} -### Query -{query} - -### The online search results -{results} - -### Requirements -Please remove irrelevant search results that are not related to the query or topic. Then, sort the remaining search results \ -based on the link credibility. If two results have equal credibility, prioritize them based on relevance. Provide the -ranked results' indices in JSON format, like [0, 1, 3, 4, ...], without including other words. -""" - -WEB_BROWSE_AND_SUMMARIZE_PROMPT = '''### Requirements -1. Utilize the text in the "Reference Information" section to respond to the question "{query}". -2. If the question cannot be directly answered using the text, but the text is related to the research topic, please provide \ -a comprehensive summary of the text. -3. If the text is entirely unrelated to the research topic, please reply with a simple text "Not relevant." -4. Include all relevant factual information, numbers, statistics, etc., if available. 
- -### Reference Information -{content} -''' - - -CONDUCT_RESEARCH_PROMPT = '''### Reference Information -{content} - -### Requirements -Please provide a detailed research report in response to the following topic: "{topic}", using the information provided \ -above. The report must meet the following requirements: - -- Focus on directly addressing the chosen topic. -- Ensure a well-structured and in-depth presentation, incorporating relevant facts and figures where available. -- Present data and findings in an intuitive manner, utilizing feature comparative tables, if applicable. -- The report should have a minimum word count of 2,000 and be formatted with Markdown syntax following APA style guidelines. -- Include all source URLs in APA format at the end of the report. -''' - - -class CollectLinks(Action): - """Action class to collect links from a search engine.""" - def __init__( - self, - name: str = "", - *args, - rank_func: Callable[[list[str]], None] | None = None, - **kwargs, - ): - super().__init__(name, *args, **kwargs) - self.desc = "Collect links from a search engine." - self.search_engine = SearchEngine() - self.rank_func = rank_func - - async def run( - self, - topic: str, - decomposition_nums: int = 4, - url_per_query: int = 4, - system_text: str | None = None, - ) -> dict[str, list[str]]: - """Run the action to collect links. - - Args: - topic: The research topic. - decomposition_nums: The number of search questions to generate. - url_per_query: The number of URLs to collect per search question. - system_text: The system text. - - Returns: - A dictionary containing the search questions as keys and the collected URLs as values. 
- """ - system_text = system_text if system_text else RESEARCH_TOPIC_SYSTEM.format(topic=topic) - keywords = await self._aask(SEARCH_TOPIC_PROMPT, [system_text]) - try: - keywords = json.loads(keywords) - keywords = parse_obj_as(list[str], keywords) - except Exception as e: - logger.exception(f"fail to get keywords related to the research topic \"{topic}\" for {e}") - keywords = [topic] - results = await asyncio.gather(*(self.search_engine.run(i, as_string=False) for i in keywords)) - - def gen_msg(): - while True: - search_results = "\n".join(f"#### Keyword: {i}\n Search Result: {j}\n" for (i, j) in zip(keywords, results)) - prompt = SUMMARIZE_SEARCH_PROMPT.format(decomposition_nums=decomposition_nums, search_results=search_results) - yield prompt - remove = max(results, key=len) - remove.pop() - if len(remove) == 0: - break - prompt = reduce_message_length(gen_msg(), self.llm.model, system_text, CONFIG.max_tokens_rsp) - logger.debug(prompt) - queries = await self._aask(prompt, [system_text]) - try: - queries = json.loads(queries) - queries = parse_obj_as(list[str], queries) - except Exception as e: - logger.exception(f"fail to break down the research question due to {e}") - queries = keywords - ret = {} - for query in queries: - ret[query] = await self._search_and_rank_urls(topic, query, url_per_query) - return ret - - async def _search_and_rank_urls(self, topic: str, query: str, num_results: int = 4) -> list[str]: - """Search and rank URLs based on a query. - - Args: - topic: The research topic. - query: The search query. - num_results: The number of URLs to collect. - - Returns: - A list of ranked URLs. 
- """ - max_results = max(num_results * 2, 6) - results = await self.search_engine.run(query, max_results=max_results, as_string=False) - _results = "\n".join(f"{i}: {j}" for i, j in zip(range(max_results), results)) - prompt = COLLECT_AND_RANKURLS_PROMPT.format(topic=topic, query=query, results=_results) - logger.debug(prompt) - indices = await self._aask(prompt) - try: - indices = json.loads(indices) - assert all(isinstance(i, int) for i in indices) - except Exception as e: - logger.exception(f"fail to rank results for {e}") - indices = list(range(max_results)) - results = [results[i] for i in indices] - if self.rank_func: - results = self.rank_func(results) - return [i["link"] for i in results[:num_results]] - - -class WebBrowseAndSummarize(Action): - """Action class to explore the web and provide summaries of articles and webpages.""" - def __init__( - self, - *args, - browse_func: Callable[[list[str]], None] | None = None, - **kwargs, - ): - super().__init__(*args, **kwargs) - if CONFIG.model_for_researcher_summary: - self.llm.model = CONFIG.model_for_researcher_summary - self.web_browser_engine = WebBrowserEngine( - engine=WebBrowserEngineType.CUSTOM if browse_func else None, - run_func=browse_func, - ) - self.desc = "Explore the web and provide summaries of articles and webpages." - - async def run( - self, - url: str, - *urls: str, - query: str, - system_text: str = RESEARCH_BASE_SYSTEM, - ) -> dict[str, str]: - """Run the action to browse the web and provide summaries. - - Args: - url: The main URL to browse. - urls: Additional URLs to browse. - query: The research question. - system_text: The system text. - - Returns: - A dictionary containing the URLs as keys and their summaries as values. 
- """ - contents = await self.web_browser_engine.run(url, *urls) - if not urls: - contents = [contents] - - summaries = {} - prompt_template = WEB_BROWSE_AND_SUMMARIZE_PROMPT.format(query=query, content="{}") - for u, content in zip([url, *urls], contents): - content = content.inner_text - chunk_summaries = [] - for prompt in generate_prompt_chunk(content, prompt_template, self.llm.model, system_text, CONFIG.max_tokens_rsp): - logger.debug(prompt) - summary = await self._aask(prompt, [system_text]) - if summary == "Not relevant.": - continue - chunk_summaries.append(summary) - - if not chunk_summaries: - summaries[u] = None - continue - - if len(chunk_summaries) == 1: - summaries[u] = chunk_summaries[0] - continue - - content = "\n".join(chunk_summaries) - prompt = WEB_BROWSE_AND_SUMMARIZE_PROMPT.format(query=query, content=content) - summary = await self._aask(prompt, [system_text]) - summaries[u] = summary - return summaries - - -class ConductResearch(Action): - """Action class to conduct research and generate a research report.""" - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - if CONFIG.model_for_researcher_report: - self.llm.model = CONFIG.model_for_researcher_report - - async def run( - self, - topic: str, - content: str, - system_text: str = RESEARCH_BASE_SYSTEM, - ) -> str: - """Run the action to conduct research and generate a research report. - - Args: - topic: The research topic. - content: The content for research. - system_text: The system text. - - Returns: - The generated research report. - """ - prompt = CONDUCT_RESEARCH_PROMPT.format(topic=topic, content=content) - logger.debug(prompt) - self.llm.auto_max_tokens = True - return await self._aask(prompt, [system_text]) - - -def get_research_system_text(topic: str, language: str): - """Get the system text for conducting research. - - Args: - topic: The research topic. - language: The language for the system text. - - Returns: - The system text for conducting research. 
- """ - return " ".join((RESEARCH_TOPIC_SYSTEM.format(topic=topic), LANG_PROMPT.format(language=language))) diff --git a/spaces/degirum/yolov8/app.py b/spaces/degirum/yolov8/app.py deleted file mode 100644 index 92b1b83e95be95af4c38294c20d41cfd5582ad7f..0000000000000000000000000000000000000000 --- a/spaces/degirum/yolov8/app.py +++ /dev/null @@ -1,44 +0,0 @@ -import streamlit as st -import degirum as dg -from PIL import Image - -zoo=dg.connect(dg.CLOUD,zoo_url='https://cs.degirum.com/degirum/ultralytics_v6',token=st.secrets["DG_TOKEN"]) - -st.title('DeGirum Cloud Platform Demo') - -st.header('Specify Model Options Below') -runtime_agent_device=st.radio("Choose runtime agent device combo",("N2X-ORCA1","N2X-ORCA","TFLite-EdgeTPU","OpenVINO-CPU"),index=0) -runtime_agent,device=runtime_agent_device.split('-')[0],runtime_agent_device.split('-')[1] -model_options=zoo.list_models(device=device,runtime=runtime_agent) -activation_option=st.radio( 'Select activation function', ['relu6', 'silu']) -dataset_option=st.radio( 'Select a dataset option', ['coco', 'face','lp','car','hand']) -st.header('Choose and Run a Model') -st.text('Select a model and upload an image. 
Then click on the submit button') -with st.form("model_form"): - filtered_model_list=[] - for model in model_options: - if activation_option in model and dataset_option in model: - filtered_model_list.append(model) - st.write('Number of models found = ', len(filtered_model_list)) - model_name=st.selectbox("Choose a Model from the list", filtered_model_list) - uploaded_file=st.file_uploader('input image') - submitted = st.form_submit_button("Submit") - if submitted: - model=zoo.load_model(model_name) - model.overlay_font_scale=3 - model.overlay_line_width=6 - model.image_backend='pil' - if model.output_postprocess_type=='PoseDetection': - model.overlay_show_labels=False - st.write("Model loaded successfully") - image = Image.open(uploaded_file) - predictions=model(image) - if model.output_postprocess_type=='Classification' or model.output_postprocess_type=='DetectionYoloPlates': - st.image(predictions.image,caption='Original Image') - st.write(predictions.results) - else: - st.image(predictions.image_overlay,caption='Image with Bounding Boxes/Keypoints') - model.measure_time=True - predictions=model(image) - stats=model.time_stats() - st.write('Expected Frames per second for the model= ', 1000.0/stats["CoreInferenceDuration_ms"].avg) \ No newline at end of file diff --git a/spaces/derek-thomas/sentence_diagrams/app.py b/spaces/derek-thomas/sentence_diagrams/app.py deleted file mode 100644 index 66015c3a6307e11b887c726143a26f8b569096e9..0000000000000000000000000000000000000000 --- a/spaces/derek-thomas/sentence_diagrams/app.py +++ /dev/null @@ -1,91 +0,0 @@ -import gradio as gr - -from pathlib import Path -from supar import Parser -from spacy import displacy -from spacy.tokens import Doc, Span -import spacy - -proj_dir = Path(__file__).parent -model_choices = sorted([str(model.name) for model in (proj_dir / 'models').glob('*')]) - - -def sentence_diagram(model_name, text, progress=gr.Progress(track_tqdm=True)): - parser = Parser.load(f'./models/{model_name}') - - 
Span.set_extension("con_tree", getter=lambda x: parser.predict([i.text for i in x], verbose=False)[0], force=True)
-    nlp = spacy.load('en_core_web_sm')
-    doc = nlp(text)
-
-    svg = displacy.render(doc, style="dep")
-    output_path = Path("sentence.svg")
-    output_path.open("w", encoding="utf-8").write(svg)
-    return output_path
-
-
-with gr.Blocks() as demo:
-    with gr.Row():
-        gr.Markdown("""
-        # Purpose
-        Way back in 7th grade, my English teacher **Brother Hill** would always disclaim our sentence diagram lessons with:
-        "*you probably won't be doing these in 20 years*". Being middle schoolers, a few of us would have loved to contradict this.
-        Unfortunately he passed away in 2015, so I thought this would be a nice tribute.
-
-        # Instructions
-        1. Choose a model:
-            - `ptb.biaffine.dep.roberta` is slower but marginally better
-            - `ptb.biaffine.dep.lstm.char` is faster but marginally worse
-        2. Write your sentence
-        3. Click Run!
-        """)
-
-        gr.HTML("") # work
-
-    model_name = gr.Dropdown(choices=model_choices, label='Model', value=model_choices[0])
-    text_in = gr.Textbox(label='Sentence(s) to diagram',
-                         value="You were a great teacher, and I'm thankful for the impact you had in my life!")
-    button = gr.Button('Run!')
-    html_out = gr.Image()
-    gr.Markdown("""
-    # Information
-    ##### This doesn't look like the sentences we used to do!
-
-    There are some slight differences between
-    [Reed-Kellogg](https://blog.ung.edu/press/classroom-grammar-an-introduction-to-the-reed-kellogg-system/)
-    and [Dependency Parsing](https://en.wikipedia.org/wiki/Dependency_grammar)
-    in both presentation and linguistic analysis as shown [here](https://en.wikipedia.org/wiki/Sentence_diagram),
-    but they are similar enough for me not to mind too much.
-
-    ##### How did you do this?
-
-    I chose a state-of-the-art **Dependency Parsing** [model](https://github.com/yzhangcs/parser) as of ~2 years ago.
- I believe this has been [surpassed](https://paperswithcode.com/sota/dependency-parsing-on-penn-treebank)
-    in recent years.
-
-    Dependency parsing was a popular NLP task whose output was fed to models to improve performance, but in the age of the
-    [transformer](https://arxiv.org/abs/1706.03762) it's rarely used anymore.
-
-    Then I deployed this in a [Gradio App](https://gradio.app) on a [Hugging Face Space](https://huggingface.co/spaces).
-
-    # To Brother Hill
-    Thanks for being a great teacher. As an adult I appreciate even more that you invested in so many of us,
-    yet you didn't get to witness a lot of the results.
-
-    > One generation plants the trees, and another gets the shade.
-    >
-    > ~ [Chinese Proverb](https://rotarycluboflahainasunset.org/stories/one-generation-plants-the-trees-and-another-gets-the-shade-(chinese-proverb))
-
-    I have a lot of fond memories of you from PE, English, and Home Repair, and I wish we
-    could have connected before you passed away.
-
-    Thanks again,
-
-    Derek
-    """)
-
-    button.click(sentence_diagram,
-                 inputs=[model_name, text_in],
-                 outputs=html_out)
-
-if __name__ == '__main__':
-    demo.queue().launch(show_error=True)
diff --git a/spaces/diacanFperku/AutoGPT/Cheat Engine Rance 6.md b/spaces/diacanFperku/AutoGPT/Cheat Engine Rance 6.md
deleted file mode 100644
index 43e1196c3e811318729edf29cd1a28358fdf20ff..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Cheat Engine Rance 6.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
    -

    If the story isn't complicated enough to be confusing, the story of Rance and his Hyper Weapons is also pretty simple: go out and rescue as many women as you can (including the DLC that wasn't in the original Rance 3, if your heart and wallet allow it) and fuck them as much as you can. Yes, that's pretty much it, and it's hard to find any problems with the concept behind Rance 6. Even if you choose to go through the game without the Hyper Weapons, you still get a Hyper Level for any mission you complete.

    -

    Rance VII -The Wolf World- is the tenth installment of the Rance Series.
    Showing its own set of deviations from the previous games, Rance VII is a much more absurd and ridiculous story featuring Rance doing whatever the hell he feels like and his innocent cute pets submissively obeying him, behaving so poorly and clumsily when hes not wearing his mask of being a huge threat in front of the whole world, often only showing up when its the last minute to apologize to a girl he just had sex with (usually on the verge of a meltdown due to the pain of being a sex slave the whole time). There were even times when he decided to completely ignore them for the whole duration of the game, since the whole point of his existence in the first place is to give the women pleasure and relieving a slave of his sexual needs for too long can ruin it for him. During the whole story, he also encounters new, never seen before characters like a guy from a game called Funny Monster and a sadistic granddad who just happens to have a weakness for fat chicks.

    -

    Cheat Engine Rance 6


    Download Ziphttps://gohhs.com/2uFVu4



    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Metallica 38th Anniversary 1981 2019 Shirt.md b/spaces/diacanFperku/AutoGPT/Metallica 38th Anniversary 1981 2019 Shirt.md deleted file mode 100644 index 7b4ef0cfb5de6e2062884ccc6e1706178428dba7..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Metallica 38th Anniversary 1981 2019 Shirt.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Metallica 38th anniversary 1981 2019 shirt


    Downloadhttps://gohhs.com/2uFUqd



    -
    -Buy here: Metallica 38th anniversary 1981-2019 shirt Source: Metallica 38th anniversary 1981-2019 shirt Judas Priest is a great metal band but ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Planogram 3d Torrent FULL Version Downloadl.md b/spaces/diacanFperku/AutoGPT/Planogram 3d Torrent FULL Version Downloadl.md deleted file mode 100644 index 292d6b42d9ad3900331be0bc2c65d88007e46c44..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Planogram 3d Torrent FULL Version Downloadl.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Planogram 3d Torrent FULL Version Downloadl


    Downloadhttps://gohhs.com/2uFVnu



    -
    -Design and create planograms and store plans with Scorpion planogram software for Mac. Intuitive software ... easy and smart. 3D Fruit Planogram Example ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/text/chinese.py b/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/text/chinese.py deleted file mode 100644 index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/text/chinese.py +++ /dev/null @@ -1,193 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text import symbols -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg - - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - -} - -tone_modifier = ToneSandhi() - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣","母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text) - - return replaced_text - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip()!=''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) #Sometimes it will crash,you can add a try-catch. 
- phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c+v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c+v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone in v_rep_map.keys(): - pinyin = c+v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]]+pinyin[1:] - - 
assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - -def get_bert_feature(text, word2ph): - from text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - -if __name__ == '__main__': - from text.chinese_bert import get_bert_feature - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." -# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/models.py b/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/models.py deleted file mode 100644 index d4afe44d883691610c5903e602a3ca245fcb3a5c..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/models.py +++ /dev/null @@ -1,707 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from commons import init_weights, get_padding -from text import symbols, num_tones, num_languages -class DurationDiscriminator(nn.Module): #vits2 - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size 
= kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.dur_proj = nn.Conv1d(1, filter_channels, 1) - - self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_1 = modules.LayerNorm(filter_channels) - self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_2 = modules.LayerNorm(filter_channels) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - self.output_layer = nn.Sequential( - nn.Linear(filter_channels, 1), - nn.Sigmoid() - ) - - def forward_probability(self, x, x_mask, dur, g=None): - dur = self.dur_proj(dur) - x = torch.cat([x, dur], dim=1) - x = self.pre_out_conv_1(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_1(x) - x = self.drop(x) - x = self.pre_out_conv_2(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_2(x) - x = self.drop(x) - x = x * x_mask - x = x.transpose(1, 2) - output_prob = self.output_layer(x) - return output_prob - - def forward(self, x, x_mask, dur_r, dur_hat, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - - output_probs = [] - for dur in [dur_r, dur_hat]: - output_prob = self.forward_probability(x, x_mask, dur, g) - output_probs.append(output_prob) - - return output_probs - -class TransformerCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - 
filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - share_parameter=False - ): - - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - - self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = self.gin_channels) if share_parameter else None - - for i in range(n_flows): - self.flows.append( - modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + 
F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - 
n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=0): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - self.emb = nn.Embedding(len(symbols), hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - self.tone_emb = nn.Embedding(num_tones, hidden_channels) - nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5) - self.language_emb = nn.Embedding(num_languages, hidden_channels) - nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5) - self.bert_proj = nn.Conv1d(1024, hidden_channels, 1) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, tone, language, bert, g=None): - x = (self.emb(x)+ self.tone_emb(tone)+ self.language_emb(language)+self.bert_proj(bert).transpose(1,2)) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask, g=g) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - 
self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, 
upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 
0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in 
periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - -class ReferenceEncoder(nn.Module): - ''' - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - ''' - - def __init__(self, spec_channels, gin_channels=0): - - super().__init__() - self.spec_channels = spec_channels - ref_enc_filters = [32, 32, 64, 64, 128, 128] - K = len(ref_enc_filters) - filters = [1] + ref_enc_filters - convs = [weight_norm(nn.Conv2d(in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), - padding=(1, 1))) for i in range(K)] - self.convs = nn.ModuleList(convs) - # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) - - out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K) - self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels, - hidden_size=256 // 2, - batch_first=True) - self.proj = nn.Linear(128, gin_channels) - - def forward(self, inputs, mask=None): - N = inputs.size(0) - out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs] - for conv in self.convs: - out = conv(out) - # out = wn(out) - out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, 128] - - return self.proj(out.squeeze(0)) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class SynthesizerTrn(nn.Module): - """ - 
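`ReferenceEncoder.calculate_channels` above just iterates the standard convolution output-size formula `L_out = (L - kernel + 2*pad) // stride + 1`, once per layer, to size the GRU input. The same helper in isolation, with 128 mel bins and the six stride-2 convs of `ref_enc_filters` as a worked example:

```python
def calculate_channels(L, kernel_size, stride, pad, n_convs):
    # Apply the conv output-size formula once per stacked conv layer.
    for _ in range(n_convs):
        L = (L - kernel_size + 2 * pad) // stride + 1
    return L

# With spec_channels=128 and K=6 convs (kernel 3, stride 2, pad 1), the
# frequency axis shrinks 128 -> 64 -> 32 -> 16 -> 8 -> 4 -> 2, so the GRU
# input size is ref_enc_filters[-1] * 2 = 128 * 2 = 256.
```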
Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=256, - gin_channels=256, - use_sdp=True, - n_flow_layer = 4, - n_layers_trans_flow = 3, - flow_share_parameter = False, - use_transformer_flow = True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_layers_trans_flow = n_layers_trans_flow - self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True) - self.use_sdp = use_sdp - self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False) - self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01) - self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6) - self.current_mas_noise_scale = self.mas_noise_scale_initial - if self.use_spk_conditioned_encoder and gin_channels > 0: - self.enc_gin_channels = gin_channels - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.enc_gin_channels) - self.dec = Generator(inter_channels, resblock, 
resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - if use_transformer_flow: - self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter) - else: - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels) - self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - else: - self.ref_enc = ReferenceEncoder(spec_channels, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - if self.use_noise_scaled_mas: - epsilon = torch.std(neg_cent) * 
torch.randn_like(neg_cent) * self.current_mas_noise_scale - neg_cent = neg_cent + epsilon - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - - l_length_sdp = self.sdp(x, x_mask, w, g=g) - l_length_sdp = l_length_sdp / torch.sum(x_mask) - - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - l_length = l_length_dp + l_length_sdp - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_) - - def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None): - #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert) - # g = self.gst(y) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', 
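The four `neg_cent` terms in the forward pass above are the expansion of a diagonal-Gaussian log-density `log N(z_p; m_p, exp(logs_p)**2)`; splitting it this way lets the z-dependent pieces be computed for every (spec frame, text frame) pair with two matmuls. A scalar sanity check that the split agrees with the direct formula:

```python
import math

def neg_cent_direct(z, m, logs):
    # Per-element log N(z; m, exp(logs)**2), written out directly.
    var = math.exp(2 * logs)
    return -0.5 * math.log(2 * math.pi) - logs - 0.5 * (z - m) ** 2 / var

def neg_cent_split(z, m, logs):
    s_sq_r = math.exp(-2 * logs)                # 1 / sigma^2, as in the code
    t1 = -0.5 * math.log(2 * math.pi) - logs    # neg_cent1
    t2 = -0.5 * (z ** 2) * s_sq_r               # neg_cent2
    t3 = z * (m * s_sq_r)                       # neg_cent3
    t4 = -0.5 * (m ** 2) * s_sq_r               # neg_cent4
    return t1 + t2 + t3 + t4

for z, m, logs in [(0.3, -0.1, 0.2), (1.5, 1.0, -0.4)]:
    assert abs(neg_cent_direct(z, m, logs) - neg_cent_split(z, m, logs)) < 1e-12
```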
t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) diff --git a/spaces/digitalxingtong/Taffy-Bert-VITS2/text/chinese.py b/spaces/digitalxingtong/Taffy-Bert-VITS2/text/chinese.py deleted file mode 100644 index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Taffy-Bert-VITS2/text/chinese.py +++ /dev/null @@ -1,193 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text import symbols -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg - - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - -} - -tone_modifier = ToneSandhi() - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣","母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text) - - return replaced_text - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in 
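In `infer` above, `commons.generate_path` expands the ceil-ed durations `w_ceil` into the hard monotonic attention map: token `i` owns the output frames between the cumulative duration before `i` and after `i`. A plain-Python illustration of that construction (the vectorized original works on masked batches; this sketch handles one sequence):

```python
import itertools

def generate_path(durations):
    # durations[i] = number of output frames assigned to input token i.
    total = sum(durations)
    path = [[0] * total for _ in durations]
    cum = list(itertools.accumulate(durations))
    for i, (start, end) in enumerate(zip([0] + cum[:-1], cum)):
        for t in range(start, end):
            path[i][t] = 1
    return path

# durations [2, 1, 3]: token 0 covers frames 0-1, token 1 frame 2, token 2 frames 3-5.
```

Every output frame ends up attending to exactly one token, which is what makes the alignment monotonic and non-overlapping.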
re.split(pattern, text) if i.strip()!=''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) #Sometimes it will crash,you can add a try-catch. - phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c+v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c+v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone in v_rep_map.keys(): - pinyin = c+v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - 
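`g2p` above splits the normalized text into clauses with a look-behind pattern built from the punctuation set, so each fragment keeps its trailing mark. Demonstrated with an assumed stand-in for `text.symbols.punctuation` (with `'-'` kept last so the character class stays literal):

```python
import re

# Assumed stand-in for text.symbols.punctuation.
punctuation = ['!', '?', '…', ',', '.', "'", '-']
pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation))

text = "你好,这是测试.结束!"
sentences = [i for i in re.split(pattern, text) if i.strip() != '']
# -> ['你好,', '这是测试.', '结束!']
```

The zero-width look-behind means the punctuation character itself is not consumed by the split, and the final empty fragment after the trailing mark is filtered out by the `strip()` check.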
pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]]+pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - -def get_bert_feature(text, word2ph): - from text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - -if __name__ == '__main__': - from text.chinese_bert import get_bert_feature - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." 
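The branch above rewrites pypinyin's initial/final spellings into the `opencpop-strict` symbol inventory: with an initial present, `uei/iou/uen` collapse to `ui/iu/un`; a bare final gains a `y`/`w` onset. A compact standalone replica of just that rewriting step (punctuation handling omitted):

```python
def normalize_pinyin(initial, final_with_tone):
    # Split the tone digit off the final, as the _g2p loop does.
    final, tone = final_with_tone[:-1], final_with_tone[-1]
    assert tone in '12345'
    pinyin = initial + final
    if initial:
        # Syllable with an initial consonant: contract the full finals.
        v_rep_map = {'uei': 'ui', 'iou': 'iu', 'uen': 'un'}
        if final in v_rep_map:
            pinyin = initial + v_rep_map[final]
    else:
        # Bare final: restore the written y/w onset.
        pinyin_rep_map = {'ing': 'ying', 'i': 'yi', 'in': 'yin', 'u': 'wu'}
        if pinyin in pinyin_rep_map:
            pinyin = pinyin_rep_map[pinyin]
        else:
            single_rep_map = {'v': 'yu', 'e': 'e', 'i': 'y', 'u': 'w'}
            if pinyin[0] in single_rep_map:
                pinyin = single_rep_map[pinyin[0]] + pinyin[1:]
    return pinyin, int(tone)

# e.g. ('h', 'uei4') -> ('hui', 4); ('', 'ing1') -> ('ying', 1); ('', 'uan2') -> ('wan', 2)
```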
-
-# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/dilums/sentence-similarity/components/ui/slider.tsx b/spaces/dilums/sentence-similarity/components/ui/slider.tsx deleted file mode 100644 index ab19d576f0013b65004f5f5f39e0bfad584a08fa..0000000000000000000000000000000000000000 --- a/spaces/dilums/sentence-similarity/components/ui/slider.tsx +++ /dev/null @@ -1,28 +0,0 @@ -"use client" - -import * as React from "react" - -import * as SliderPrimitive from "@radix-ui/react-slider" - -import { cn } from "@/lib/utils" - -const Slider = React.forwardRef< - React.ElementRef<typeof SliderPrimitive.Root>, - React.ComponentPropsWithoutRef<typeof SliderPrimitive.Root> ->(({ className, ...props }, ref) => ( - <SliderPrimitive.Root ref={ref} className={cn("relative flex w-full touch-none select-none items-center", className)} {...props}> - <SliderPrimitive.Track className="relative h-2 w-full grow overflow-hidden rounded-full bg-secondary"> - <SliderPrimitive.Range className="absolute h-full bg-primary" /> - </SliderPrimitive.Track> - <SliderPrimitive.Thumb className="block h-5 w-5 rounded-full border-2 border-primary bg-background ring-offset-background transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50" /> - </SliderPrimitive.Root> -)) -Slider.displayName = SliderPrimitive.Root.displayName - -export { Slider } diff --git a/spaces/dinhminh20521597/OCR_DEMO/app_pages/home.py b/spaces/dinhminh20521597/OCR_DEMO/app_pages/home.py deleted file mode 100644 index ba025f10ec1c89895d4be5bf99704fedcde53182..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/app_pages/home.py +++ /dev/null @@ -1,19 +0,0 @@ -import streamlit as st - -def app(): - st.image('ocr.png') - - st.write("") - - st.markdown('''#### OCR, or Optical Character Recognition, is a computer vision task, \ - which includes the detection of text areas, and the recognition of characters.''') - st.write("") - st.write("") - - st.markdown("##### This app allows you to compare, from a given image, the results of different solutions:") - st.markdown("##### *EasyOcr, PaddleOCR, MMOCR, Tesseract*") - st.write("") - st.write("") - st.markdown("👈 Select the **About** page from the sidebar for information on how the app works") - - st.markdown("👈 or directly select the **App** page") \ No newline at end of file diff --git a/spaces/dmeck/RVC-Speakers/rvc/infer_pack/__init__.py b/spaces/dmeck/RVC-Speakers/rvc/infer_pack/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git 
a/spaces/dorkai/SINGPT-Temporary/extensions/elevenlabs_tts/script.py b/spaces/dorkai/SINGPT-Temporary/extensions/elevenlabs_tts/script.py deleted file mode 100644 index 90d61efc6aa77bc2377c435eefe4cf623b588168..0000000000000000000000000000000000000000 --- a/spaces/dorkai/SINGPT-Temporary/extensions/elevenlabs_tts/script.py +++ /dev/null @@ -1,113 +0,0 @@ -from pathlib import Path - -import gradio as gr -from elevenlabslib import * -from elevenlabslib.helpers import * - -params = { - 'activate': True, - 'api_key': '12345', - 'selected_voice': 'None', -} - -initial_voice = ['None'] -wav_idx = 0 -user = ElevenLabsUser(params['api_key']) -user_info = None - - -# Check if the API is valid and refresh the UI accordingly. -def check_valid_api(): - - global user, user_info, params - - user = ElevenLabsUser(params['api_key']) - user_info = user._get_subscription_data() - print('checking api') - if params['activate'] == False: - return gr.update(value='Disconnected') - elif user_info is None: - print('Incorrect API Key') - return gr.update(value='Disconnected') - else: - print('Got an API Key!') - return gr.update(value='Connected') - -# Once the API is verified, get the available voices and update the dropdown list -def refresh_voices(): - - global user, user_info - - your_voices = [None] - if user_info is not None: - for voice in user.get_available_voices(): - your_voices.append(voice.initialName) - return gr.Dropdown.update(choices=your_voices) - else: - return - -def remove_surrounded_chars(string): - new_string = "" - in_star = False - for char in string: - if char == '*': - in_star = not in_star - elif not in_star: - new_string += char - return new_string - -def input_modifier(string): - """ - This function is applied to your text inputs before - they are fed into the model. - """ - - return string - -def output_modifier(string): - """ - This function is applied to the model outputs. 
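`remove_surrounded_chars` above strips narration between `*` markers before text is sent to the TTS API; note that an unclosed `*` silently drops the rest of the string. The helper restated so it can be exercised directly:

```python
def remove_surrounded_chars(string):
    # Drop any text between '*' markers (and the markers themselves),
    # as the extension does before synthesizing speech.
    new_string = ""
    in_star = False
    for char in string:
        if char == '*':
            in_star = not in_star
        elif not in_star:
            new_string += char
    return new_string

# "*smiles* Hello there *waves*" -> " Hello there "
```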
- """ - - global params, wav_idx, user, user_info - - if params['activate'] == False: - return string - elif user_info == None: - return string - - string = remove_surrounded_chars(string) - string = string.replace('"', '') - string = string.replace('“', '') - string = string.replace('\n', ' ') - string = string.strip() - - if string == '': - string = 'empty reply, try regenerating' - - output_file = Path(f'extensions/elevenlabs_tts/outputs/{wav_idx:06d}.wav'.format(wav_idx)) - voice = user.get_voices_by_name(params['selected_voice'])[0] - audio_data = voice.generate_audio_bytes(string) - save_bytes_to_path(Path(f'extensions/elevenlabs_tts/outputs/{wav_idx:06d}.wav'), audio_data) - - string = f'' - wav_idx += 1 - return string - -def ui(): - - # Gradio elements - with gr.Row(): - activate = gr.Checkbox(value=params['activate'], label='Activate TTS') - connection_status = gr.Textbox(value='Disconnected', label='Connection Status') - voice = gr.Dropdown(value=params['selected_voice'], choices=initial_voice, label='TTS Voice') - with gr.Row(): - api_key = gr.Textbox(placeholder="Enter your API key.", label='API Key') - connect = gr.Button(value='Connect') - - # Event functions to update the parameters in the backend - activate.change(lambda x: params.update({'activate': x}), activate, None) - voice.change(lambda x: params.update({'selected_voice': x}), voice, None) - api_key.change(lambda x: params.update({'api_key': x}), api_key, None) - connect.click(check_valid_api, [], connection_status) - connect.click(refresh_voices, [], voice) diff --git a/spaces/ejschwartz/function-method-detector/app.py b/spaces/ejschwartz/function-method-detector/app.py deleted file mode 100644 index 72388b85691df47ea095bcf800ebeec462810397..0000000000000000000000000000000000000000 --- a/spaces/ejschwartz/function-method-detector/app.py +++ /dev/null @@ -1,158 +0,0 @@ -import gradio as gr -import shap -import transformers - -import os -import re -import subprocess -import sys -import 
tempfile - -model = gr.load("ejschwartz/oo-method-test-model-bylibrary", src="models") - -model_interp = transformers.pipeline("text-classification", "ejschwartz/oo-method-test-model-bylibrary") - -def get_all_dis(bname, addrs=None): - - anafile = tempfile.NamedTemporaryFile(prefix=os.path.basename(bname) + "_", suffix=".bat_ana") - ananame = anafile.name - - addrstr = "" - if addrs is not None: - addrstr = " ".join([f"--function-at {x}" for x in addrs]) - - subprocess.check_output(f"bat-ana {addrstr} --no-post-analysis -o {ananame} {bname} 2>/dev/null", shell=True) - - - output = subprocess.check_output(f"bat-dis --no-insn-address --no-bb-cfg-arrows --color=off {ananame} 2>/dev/null", shell=True) - output = re.sub(b' +', b' ', output) - - func_dis = {} - last_func = None - current_output = [] - - for l in output.splitlines(): - if l.startswith(b";;; function 0x"): - if last_func is not None: - func_dis[last_func] = b"\n".join(current_output) - last_func = int(l.split()[2], 16) - current_output.clear() - - if not b";;" in l: - current_output.append(l) - - if last_func is not None: - if last_func in func_dis: - print("Warning: Ignoring multiple functions at the same address") - else: - func_dis[last_func] = b"\n".join(current_output) - - return func_dis - -def get_funs(f): - funs = get_all_dis(f.name) - return "\n".join(("%#x" % addr) for addr in funs.keys()) - -with gr.Blocks() as demo: - - all_dis_state = gr.State() - - gr.Markdown( - """ - # Function/Method Detector - - First, upload a binary. - - This model was only trained on 32-bit MSVC++ binaries. You can provide - other types of binaries, but the result will probably be gibberish. - """ - ) - - file_widget = gr.File(label="Binary file") - - with gr.Column(visible=False) as col: - #output = gr.Textbox("Output") - - gr.Markdown(""" - Great, you selected an executable! Now pick the function you would like to analyze. 
- """) - - fun_dropdown = gr.Dropdown(label="Select a function", choices=["Woohoo!"], interactive=True) - - gr.Markdown(""" - Below you can find the selected function's disassembly, and the model's - prediction of whether the function is an object-oriented method or a - regular function. - """) - - with gr.Row(visible=True) as result: - disassembly = gr.Textbox(label="Disassembly", lines=20) - with gr.Column(): - clazz = gr.Label() - interpret_button = gr.Button("Interpret (very slow)") - interpretation = gr.components.Interpretation(disassembly) - - example_widget = gr.Examples( - examples=[f.path for f in os.scandir(os.path.join(os.path.dirname(__file__), "examples"))], - inputs=file_widget, - outputs=[all_dis_state, disassembly, clazz] - ) - - def file_change_fn(file, progress=gr.Progress()): - - if file is None: - return {col: gr.update(visible=False), - all_dis_state: None} - else: - - #fun_data = {42: 2, 43: 3} - progress(0, desc="Disassembling executable") - fun_data = get_all_dis(file.name) - - addrs = ["%#x" % addr for addr in fun_data.keys()] - - return {col: gr.update(visible=True), - fun_dropdown: gr.Dropdown.update(choices=addrs, value=addrs[0]), - all_dis_state: fun_data - } - - def function_change_fn(selected_fun, fun_data): - - disassembly_str = fun_data[int(selected_fun, 16)].decode("utf-8") - load_results = model.fn(disassembly_str) - top_k = {e['label']: e['confidence'] for e in load_results['confidences']} - - return {disassembly: gr.Textbox.update(value=disassembly_str), - clazz: gr.Label.update(top_k), - # I can't figure out how to hide this - #interpretation: {} - } - - # XXX: Ideally we'd use the gr.load model, which uses the huggingface - # inference API. But shap library appears to use information in the - # transformers pipeline, and I don't feel like figuring out how to - # reimplement that, so we'll just use a regular transformers pipeline here - # for interpretation. 
- def interpretation_function(text, progress=gr.Progress(track_tqdm=True)): - - progress(0, desc="Interpreting function") - explainer = shap.Explainer(model_interp) - shap_values = explainer([text]) - - # Dimensions are (batch size, text size, number of classes) - # Since we care about positive sentiment, use index 1 - scores = list(zip(shap_values.data[0], shap_values.values[0, :, 1])) - # Scores contains (word, score) pairs - - - # Format expected by gr.components.Interpretation - return {"original": text, "interpretation": scores} - - file_widget.change(file_change_fn, file_widget, [col, fun_dropdown, all_dis_state]) - - fun_dropdown.change(function_change_fn, [fun_dropdown, all_dis_state], [disassembly, clazz, interpretation]) - - interpret_button.click(interpretation_function, disassembly, interpretation) - -demo.queue() -demo.launch(server_name="0.0.0.0", server_port=7860, share=True) diff --git "a/spaces/f2api/gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" "b/spaces/f2api/gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" deleted file mode 100644 index 8d3f97b5b3e13386c50ff463133b92aa570804c2..0000000000000000000000000000000000000000 --- "a/spaces/f2api/gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" +++ /dev/null @@ -1,240 +0,0 @@ -from toolbox import update_ui, trimmed_format_exc -from toolbox import CatchException, report_execption, write_results_to_file, zip_folder - - -class PaperFileGroup(): - def __init__(self): - self.file_paths = [] - self.file_contents = [] - self.sp_file_contents = [] - self.sp_file_index = [] - self.sp_file_tag = [] - - # count_token - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - self.get_token_num = get_token_num - - def run_file_split(self, max_token_limit=1900): - """ - 将长文本分离开来 - """ - 
for index, file_content in enumerate(self.file_contents): - if self.get_token_num(file_content) < max_token_limit: - self.sp_file_contents.append(file_content) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index]) - else: - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit) - for j, segment in enumerate(segments): - self.sp_file_contents.append(segment) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex") - - print('Segmentation: done') - def merge_result(self): - self.file_result = ["" for _ in range(len(self.file_paths))] - for r, k in zip(self.sp_file_result, self.sp_file_index): - self.file_result[k] += r - - def write_result(self): - manifest = [] - for path, res in zip(self.file_paths, self.file_result): - with open(path + '.polish.tex', 'w', encoding='utf8') as f: - manifest.append(path + '.polish.tex') - f.write(res) - return manifest - - def zip_result(self): - import os, time - folder = os.path.dirname(self.file_paths[0]) - t = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) - zip_folder(folder, './gpt_log/', f'{t}-polished.zip') - - -def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en', mode='polish'): - import time, os, re - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - - - # <-------- 读取Latex文件,删除其中的所有注释 ----------> - pfg = PaperFileGroup() - - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - # 定义注释的正则表达式 - comment_pattern = r'(?<!\\)%.*' - # 使用正则表达式查找注释,并替换为空字符串 - clean_tex_content = re.sub(comment_pattern, '', file_content) - # 记录删除注释后的文本 - pfg.file_paths.append(fp) - pfg.file_contents.append(clean_tex_content) - - # <-------- 拆分过长的latex文件 ----------> 
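`run_file_split` above keeps every fragment under `max_token_limit` tokens and records which source file each fragment came from, so `merge_result` can later reassemble the polished pieces in order. A toy version of that bookkeeping, with word count as a fake tokenizer and a naive fixed-size splitter standing in for `breakdown_txt_to_satisfy_token_limit_for_pdf`:

```python
def run_file_split(file_paths, file_contents, get_token_num, max_token_limit):
    # Returns (fragments, source_index, tags), mirroring
    # sp_file_contents / sp_file_index / sp_file_tag above.
    sp_contents, sp_index, sp_tag = [], [], []
    for index, content in enumerate(file_contents):
        if get_token_num(content) < max_token_limit:
            sp_contents.append(content)
            sp_index.append(index)
            sp_tag.append(file_paths[index])
        else:
            words = content.split()
            # Naive stand-in for the real token-limit breakdown helper.
            segments = [' '.join(words[i:i + max_token_limit])
                        for i in range(0, len(words), max_token_limit)]
            for j, seg in enumerate(segments):
                sp_contents.append(seg)
                sp_index.append(index)
                sp_tag.append(file_paths[index] + f".part-{j}.tex")
    return sp_contents, sp_index, sp_tag
```

Because every fragment carries its source index, merging is just string concatenation per index, exactly as `merge_result` does.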
- pfg.run_file_split(max_token_limit=1024) - n_split = len(pfg.sp_file_contents) - - - # <-------- 多线程润色开始 ----------> - if language == 'en': - if mode == 'polish': - inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, " + - "improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - else: - inputs_array = [r"Below is a section from an academic paper, proofread this section." + - r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + - r"Answer me only with the revised text:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"Polish {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)] - elif language == 'zh': - if mode == 'polish': - inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - else: - inputs_array = [f"以下是一篇学术论文中的一段内容,请对这部分内容进行语法矫正。不要修改任何LaTeX命令,例如\section,\cite和方程式:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag] - sys_prompt_array=["你是一位专业的中文学术论文作家。" for _ in range(n_split)] - - - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[""] for _ in range(n_split)], - sys_prompt_array=sys_prompt_array, - # max_workers=5, # 并行任务数量限制,最多同时执行5个,其他的排队等待 - scroller_max_len = 80 - ) - - # <-------- 文本碎片重组为完整的tex文件,整理结果为压缩包 ----------> - try: - pfg.sp_file_result = [] - for i_say, gpt_say in zip(gpt_response_collection[0::2], gpt_response_collection[1::2]): - 
pfg.sp_file_result.append(gpt_say) - pfg.merge_result() - pfg.write_result() - pfg.zip_result() - except: - print(trimmed_format_exc()) - - # <-------- 整理结果,退出 ----------> - create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md" - res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name) - history = gpt_response_collection - chatbot.append((f"{fp}完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -@CatchException -def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en') - - - - - - -@CatchException -def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 
刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh') - - - - -@CatchException -def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行纠错。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - 
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en', mode='proofread') diff --git a/spaces/facebook/ov-seg/open_vocab_seg/data/augmentations.py b/spaces/facebook/ov-seg/open_vocab_seg/data/augmentations.py deleted file mode 100644 index 44e4906d4827812fa707f50e703f253a64ab6e43..0000000000000000000000000000000000000000 --- a/spaces/facebook/ov-seg/open_vocab_seg/data/augmentations.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved - -import math -import numbers -import numpy as np -from detectron2.data.transforms.augmentation import Augmentation -from detectron2.data.transforms.transform import ( - CropTransform, - ResizeTransform, - TransformList, -) -from PIL import Image -from fvcore.transforms.transform import PadTransform - - -def mask2box(mask: np.ndarray): - # use naive way - row = np.nonzero(mask.sum(axis=0))[0] - if len(row) == 0: - return None - x1 = row.min() - x2 = row.max() - col = np.nonzero(mask.sum(axis=1))[0] - y1 = col.min() - y2 = col.max() - return x1, y1, x2 + 1 - x1, y2 + 1 - y1 - - -def expand_box(x, y, w, h, expand_ratio=1.0, max_h=None, max_w=None): - cx = x + 0.5 * w - cy = y + 0.5 * h - w = w * expand_ratio - h = h * expand_ratio - box = [cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h] - if max_h is not None: - box[1] = max(0, box[1]) - box[3] = min(max_h - 1, box[3]) - if max_w is not None: - box[0] = max(0, box[0]) - box[2] = min(max_w - 1, box[2]) - box[2] = box[2] - box[0] - box[3] = box[3] - box[1] - - return [int(b) for b in box] - - -class CropImageWithMask(Augmentation): - def __init__(self, expand_ratio=1.0, mode="choice"): - if isinstance(expand_ratio, numbers.Number): - expand_ratio = (expand_ratio, expand_ratio) - self.mode = mode - self.expand_ratio = expand_ratio - if self.mode == 
"range": - assert len(expand_ratio) == 2 and expand_ratio[0] < expand_ratio[1] - - def get_transform(self, image, sem_seg, category_id): - input_size = image.shape[:2] - bin_mask = sem_seg == category_id - x, y, w, h = mask2box(bin_mask) - if self.mode == "choice": - expand_ratio = np.random.choice(self.expand_ratio) - else: - expand_ratio = np.random.uniform(self.expand_ratio[0], self.expand_ratio[1]) - x, y, w, h = expand_box(x, y, w, h, expand_ratio, *input_size) - w = max(w, 1) - h = max(h, 1) - return CropTransform(x, y, w, h, input_size[1], input_size[0]) - - -class CropImageWithBox(Augmentation): - def __init__(self, expand_ratio=1.0, mode="choice"): - if isinstance(expand_ratio, numbers.Number): - expand_ratio = (expand_ratio, expand_ratio) - self.mode = mode - self.expand_ratio = expand_ratio - if self.mode == "range": - assert len(expand_ratio) == 2 and expand_ratio[0] < expand_ratio[1] - - def get_transform(self, image, boxes): - input_size = image.shape[:2] - x, y, x2, y2 = boxes[0] - w = x2 - x + 1 - h = y2 - y + 1 - if self.mode == "choice": - expand_ratio = np.random.choice(self.expand_ratio) - else: - expand_ratio = np.random.uniform(self.expand_ratio[0], self.expand_ratio[1]) - x, y, w, h = expand_box(x, y, w, h, expand_ratio, *input_size) - w = max(w, 1) - h = max(h, 1) - return CropTransform(x, y, w, h, input_size[1], input_size[0]) - - -class RandomResizedCrop(Augmentation): - def __init__( - self, - size, - scale=(0.08, 1.0), - ratio=(3.0 / 4.0, 4.0 / 3.0), - interpolation=Image.BILINEAR, - ): - if isinstance(size, int): - size = (size, size) - else: - assert isinstance(size, (tuple, list)) and len(size) == 2 - - self.size = size - - self.scale = scale - self.ratio = ratio - self.interpolation = interpolation - - def get_transform(self, image): - height, width = image.shape[:2] - area = height * width - - log_ratio = np.log(np.array(self.ratio)) - is_success = False - for _ in range(10): - target_area = area * np.random.uniform(self.scale[0], 
self.scale[1]) - aspect_ratio = np.exp(np.random.uniform(log_ratio[0], log_ratio[1])) - - w = int(round(math.sqrt(target_area * aspect_ratio))) - h = int(round(math.sqrt(target_area / aspect_ratio))) - - if 0 < w <= width and 0 < h <= height: - i = np.random.randint(0, width - w + 1) - j = np.random.randint(0, height - h + 1) - - is_success = True - break - - if not is_success: - # Fallback to central crop - in_ratio = float(width) / float(height) - if in_ratio < min(self.ratio): - w = width - h = int(round(w / min(self.ratio))) - elif in_ratio > max(self.ratio): - h = height - w = int(round(h * max(self.ratio))) - else: # whole image - w = width - h = height - i = (width - w) // 2 - j = (height - h) // 2 - return TransformList( - [ - CropTransform(i, j, w, h, width, height), - ResizeTransform( - h, w, self.size[1], self.size[0], interp=self.interpolation - ), - ] - ) - - -class CenterCrop(Augmentation): - def __init__(self, size, seg_ignore_label): - if isinstance(size, numbers.Number): - size = (int(size), int(size)) - elif isinstance(size, (tuple, list)) and len(size) == 1: - size = (size[0], size[0]) - self.size = size - self.seg_ignore_label = seg_ignore_label - - def get_transform(self, image): - - image_height, image_width = image.shape[:2] - crop_height, crop_width = self.size - - transforms = [] - if crop_width > image_width or crop_height > image_height: - padding_ltrb = [ - (crop_width - image_width) // 2 if crop_width > image_width else 0, - (crop_height - image_height) // 2 if crop_height > image_height else 0, - (crop_width - image_width + 1) // 2 if crop_width > image_width else 0, - (crop_height - image_height + 1) // 2 - if crop_height > image_height - else 0, - ] - transforms.append( - PadTransform( - *padding_ltrb, - orig_w=image_width, - orig_h=image_height, - seg_pad_value=self.seg_ignore_label - ) - ) - image_width, image_height = ( - image_width + padding_ltrb[0] + padding_ltrb[2], - image_height + padding_ltrb[1] + padding_ltrb[3], - ) - - 
crop_top = int(round((image_height - crop_height) / 2.0)) - crop_left = int(round((image_width - crop_width) / 2.0)) - transforms.append( - CropTransform( - crop_left, crop_top, crop_width, crop_height, image_width, image_height - ) - ) - return TransformList(transforms) diff --git a/spaces/falterWliame/Face_Mask_Detection/Aurora Scientific Calculator Sc 500 Zip.md b/spaces/falterWliame/Face_Mask_Detection/Aurora Scientific Calculator Sc 500 Zip.md deleted file mode 100644 index 32467a92184c9e200034b4069ddc61d487f9f542..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Aurora Scientific Calculator Sc 500 Zip.md +++ /dev/null @@ -1,49 +0,0 @@ - -

    Aurora Scientific Calculator SC 500 Zip: A Review

    -

    If you are looking for a reliable and versatile scientific calculator, you might want to consider the Aurora Scientific Calculator SC 500 Zip. This calculator has a sleek design, a large LCD display, and a protective zip case that makes it easy to carry around. It also has many features and functions that can help you with various calculations, such as trigonometry, statistics, fractions, complex numbers, and more.

    -

    aurora scientific calculator sc 500 zip


    Download File ►►►►► https://urlca.com/2uDdI9



    -

    In this article, we will review the Aurora Scientific Calculator SC 500 Zip and highlight its pros and cons. We will also compare it with some of its competitors and give you some tips on how to use it effectively.

    - -

    Features and Functions of the Aurora Scientific Calculator SC 500 Zip

    -

    The Aurora Scientific Calculator SC 500 Zip has 254 functions, including:

    -
      -
    • Basic arithmetic operations
    • -
    • Parentheses and memory functions
    • -
    • Scientific notation and engineering notation
    • -
    • Trigonometric functions (degrees, radians, and grads)
    • -
    • Inverse trigonometric functions
    • -
    • Hyperbolic functions and inverse hyperbolic functions
    • -
    • Exponential and logarithmic functions
    • -
    • Power and root functions
    • -
    • Factorial and permutation functions
    • -
    • Combination and binomial coefficient functions
    • -
    • Fraction calculations and conversions
    • -
    • Mixed number calculations and conversions
    • -
    • Decimal calculations and conversions
    • -
    • Complex number calculations (rectangular and polar forms)
    • -
    • Linear equation solver (up to 3 variables)
    • -
    • Quadratic equation solver
    • -
    • Cubic equation solver
    • -
    • Nth degree polynomial equation solver (up to 6 coefficients)
    • -
    • Simultaneous equation solver (up to 3 equations)
    • -
    • Matrix calculations (up to 3x3 matrices)
    • -
    • Determinant, inverse, transpose, and trace of matrices
    • -
    • Vector calculations (up to 3 dimensions)
    • -
    • Dot product and cross product of vectors
    • -
    • Statistics calculations (1-variable and 2-variable)
    • -
    • Data input and editing (up to 40 data pairs)
    • -
    • Mean, standard deviation, variance, sum, product, minimum, maximum, median, quartiles, percentiles, and outliers of data
    • -
    • Linear regression, quadratic regression, cubic regression, exponential regression, logarithmic regression, power regression, inverse regression, sinusoidal regression, logistic regression, and polynomial regression of data
    • -
    • Coefficient of determination (R-squared) of data
    • -
    • Coefficient of correlation (r) of data
    • -
    • Prediction of y-value or x-value from regression equation
    • -
    • Normal distribution calculations (probability density function and cumulative distribution function)
    • -
    • Inverse normal distribution calculations (z-score from probability or probability from z-score)
    • -
    • Bivariate normal distribution calculations (joint probability density function and joint cumulative distribution function)
    • -
    • T-distribution calculations (probability density function and cumulative distribution function)
    • -
    • Inverse t-distribution calculations (t-value from probability or probability from t-value)
    • -
    • F-distribution calculations (probability density function and cumulative distribution function)
    • -
    • Inverse F-distribution calculations (F-value from probability or probability from F-value)
    • -

      -
      -
      \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/CCleaner Professional 5.42.6499 - SeuPirate Utorrent.md b/spaces/falterWliame/Face_Mask_Detection/CCleaner Professional 5.42.6499 - SeuPirate Utorrent.md deleted file mode 100644 index 41a07a78356077d2ef9966c653baf3ec5f3d2623..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/CCleaner Professional 5.42.6499 - SeuPirate Utorrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

      CCleaner Professional 5.42.6499 - SeuPirate utorrent


      Download Zip 🗸 https://urlca.com/2uDckA



      -
      -
      -
      -

      diff --git a/spaces/falterWliame/Face_Mask_Detection/Encase Forensic V7 Download Full Version.md b/spaces/falterWliame/Face_Mask_Detection/Encase Forensic V7 Download Full Version.md deleted file mode 100644 index 3f516c806ac23e283fd786730cb8b09d7ddc74b7..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Encase Forensic V7 Download Full Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

      encase forensic v7 download full version


      Download Filehttps://urlca.com/2uDbTs



      -
      -Download Forensic Imager is a Windows based program that will acquire, ... Programs for query ″encase forensic v7 download″ Memory forensics tools are ... available for conducting Forensic Analysis in the Windows Operating See full list on ... 11 Sep 2019 If you are using the standalone Windows executable version of ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/Download MiniStrike APK for Android - Uptodown.md b/spaces/fatiXbelha/sd/Download MiniStrike APK for Android - Uptodown.md deleted file mode 100644 index f8856fc1edc9c490bebb9c37923f0857cfff251f..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download MiniStrike APK for Android - Uptodown.md +++ /dev/null @@ -1,89 +0,0 @@ - -

      Download MiniStrike Uptodown: A Fun and Fast-Paced Shooter Game for Android

      -

      If you are looking for a shooter game that is fun, fast-paced, and easy to play on your Android device, you should check out MiniStrike. This game is a tribute to the classic Counter-Strike, with a strong focus on gameplay. You can play with your friends online or offline, in team vs team mode, with up to 10 players simultaneously. You can also choose from different weapons, maps, and modes, and customize your controls to suit your preferences. In this article, we will tell you more about what MiniStrike is, why you should download it from Uptodown, and how to do it.

      -

      download ministrike uptodown


      Downloadhttps://urllie.com/2uNITk



      -

      What is MiniStrike?

      -

      MiniStrike is a fun multiplayer third-person shooter developed by Malo The Toad. It was released in 2020 and has since gained over 1 million downloads and a 4.3-star rating on the Google Play Store. Here are some of the features that make MiniStrike a great game:

      -

      A tribute to counter-strike

      -

      MiniStrike is inspired by the popular counter-strike game, which is one of the most played and influential shooter games of all time. You can see the resemblance in the graphics, sounds, and gameplay of MiniStrike. The game also has similar weapons, such as pistols, rifles, shotguns, and grenades, with different properties like damage over distance, moving accuracy, recoil, etc. You can also buy weapons and equipment at the beginning of each round.

      -

      A multiplayer third person shooter

      -

      MiniStrike is designed for multiplayer mode, where you can play with your friends or other players from around the world. You can join or create a server online (using 3G or Wifi) or offline (using Wifi or Hotspot). The game supports up to 10 players simultaneously (5 vs 5), in team vs team mode. You can also chat with your teammates or opponents using the in-game chat feature.

      -

      A game with intuitive controls and realistic physics

      -

      MiniStrike has intuitive controls that are optimized for mobile devices. You can move, aim, shoot, reload, switch weapons, and throw grenades using simple touch gestures. You can also configure the controls from the option menu, to adjust the sensitivity, position, and size of the buttons. The game also has realistic physics, such as bullet drop, recoil, hit box (headshot!), and blood splatter.

      -

      Why download MiniStrike Uptodown?

      -

      MiniStrike is available on Google Play Store, but you may want to download it from Uptodown instead. Uptodown is a totally open app store that offers many advantages over other app stores. Here are some of them:

      -

      Access to the latest version and updates

      -

      Uptodown provides you with the latest version of MiniStrike as soon as it is released by the developer. You can also get automatic updates as well as notifications when a new version is available. This way, you can enjoy the newest features and improvements of the game without any delay.

      -

      Roll back to any previous version

      -

      Uptodown also allows you to roll back to any previous version of MiniStrike that you want. This is useful if you encounter any problems with the latest version or if you prefer an older version for some reason. You can easily switch between different versions of the game without losing your data or settings.

      -


      -

      No regional or country-specific restrictions

      -

      Uptodown does not impose any regional or country-specific restrictions on the apps that it offers. This means that you can download MiniStrike from Uptodown regardless of where you are located or what language you speak. You can also access apps that are not available in your region or country on other app stores.

      -

      No sign-up or subscription required

      -

      Uptodown does not require you to sign up or subscribe to use its services. You can download MiniStrike from Uptodown without creating an account or providing any personal information. You can also download as many apps as you want without any limitations or fees.

      -

      How to download MiniStrike Uptodown?

      -

      Downloading MiniStrike from Uptodown is very easy and fast. You just need to follow these simple steps:

      -

      Visit the official Uptodown website

      -

      The first thing you need to do is to visit the official Uptodown website, which is https://www.uptodown.com. You can access the website from any browser on your Android device or PC.

      -

      Search for MiniStrike in the app store

      -

      The next thing you need to do is to search for MiniStrike in the app store. You can use the search bar at the top of the website, or browse through the categories and subcategories of apps. You can also filter the results by popularity, rating, date, etc. Once you find MiniStrike, click on it to go to its app page.

      -

      Download the APK file and install it on your device

      -

      The last thing you need to do is to download the APK file of MiniStrike and install it on your device. You can see the download button at the bottom of the app page, which will show you the size and version of the APK file. You can also see the previous versions of the app and choose any of them if you want. After you click on the download button, the APK file will be downloaded to your device's storage. You may need to enable the installation of apps from unknown sources in your device's settings before you can install it. Once you install it, you can launch it from your app drawer or home screen.

      -
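      Before sideloading any APK obtained outside Google Play, it is worth checking that the file you received matches the one the site published. As a minimal sketch (the file name `demo.apk` and its contents are placeholders, not a real MiniStrike build), you can compute a SHA-256 checksum in Python and compare it against the checksum the download page lists, if one is provided:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo on a throwaway file; replace the path with your downloaded APK.
with open("demo.apk", "wb") as fh:
    fh.write(b"not a real apk")
print(sha256_of("demo.apk"))  # prints the 64-character hex digest
```

      If the digest does not match the value published on the download page, discard the file and download it again.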

      Enjoy the game with your friends online or offline

      -

      Now that you have downloaded and installed MiniStrike from Uptodown, you can enjoy the game with your friends online or offline. You can join or create a server, choose a map and a mode, and start shooting. You can also chat with other players, customize your controls, and check your stats and achievements.

      -

      Conclusion

      -

      MiniStrike is a fun and fast-paced shooter game for Android that is inspired by counter-strike. You can play with your friends online or offline, in team vs team mode, with up to 10 players simultaneously. You can also choose from different weapons, maps, and modes, and customize your controls to suit your preferences. If you want to download MiniStrike, we recommend that you do it from Uptodown, which is a totally open app store that offers many advantages over other app stores. You can access the latest version and updates of MiniStrike, roll back to any previous version, download apps without any regional or country-specific restrictions, and download apps without any sign-up or subscription required. To download MiniStrike from Uptodown, you just need to visit the official Uptodown website, search for MiniStrike in the app store, download the APK file and install it on your device, and enjoy the game with your friends online or offline.

      -

      We hope that this article has helped you learn more about MiniStrike and how to download it from Uptodown. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

      -

      FAQs

      -
        -
      • Is MiniStrike free?
      • -
      • Yes, MiniStrike is free to download and play. However, it may contain ads and in-app purchases that require real money.
      • -
      • Is MiniStrike safe?
      • -
      • Yes, MiniStrike is safe to download and play. It does not contain any viruses or malware that could harm your device or data. However, you should always download apps from trusted sources like Uptodown.
      • -
      • Is MiniStrike compatible with my device?
      • -
      • MiniStrike is compatible with most Android devices that have Android 4.4 or higher. However, some devices may experience performance issues or crashes due to hardware limitations.
      • How can I contact the developer of MiniStrike?
      • -
      • You can contact the developer of MiniStrike by sending an email to malothetoad@gmail.com. You can also follow the developer on Twitter (@malothetoad) or visit the official website of MiniStrike (https://ministrike.net).
      • -
      • How can I support the development of MiniStrike?
      • -
      • You can support the development of MiniStrike by rating and reviewing the game on Google Play Store or Uptodown, sharing the game with your friends, and making donations or purchases in the game. You can also provide feedback and suggestions to the developer via email or social media.
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Roblox APK for Free and Enjoy Millions of Experiences on Your Mobile Device.md b/spaces/fatiXbelha/sd/Download Roblox APK for Free and Enjoy Millions of Experiences on Your Mobile Device.md deleted file mode 100644 index 86c35301f432b48ba5ad9f41454cc84ec2d5814c..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Roblox APK for Free and Enjoy Millions of Experiences on Your Mobile Device.md +++ /dev/null @@ -1,149 +0,0 @@ - -

      How to Download and Play Roblox on Android Devices

      -

      Roblox is one of the most popular online gaming platforms in the world, with over 150 million monthly active users. It is not just a game, but a platform where you can create, share, and play millions of games created by other users. You can also chat with your friends, customize your avatar, and join various communities. In this article, we will show you how to download and install Roblox apk for Android devices, how to play Roblox on your smartphone or tablet, and how to stay safe and secure on Roblox.

      -

      apk download roblox


      Download Zip ··· https://urllie.com/2uNEMh



      -

      What is Roblox and why is it popular?

      -

      Roblox is a global platform that allows you to create, share, and play immersive 3D experiences with friends and millions of other people. You can choose from a variety of genres, such as adventure, role-playing, simulation, racing, puzzle, and more. You can also use the Roblox Studio tool to create your own games using the Lua programming language. Roblox is free to play, but you can also buy Robux, the virtual currency, to purchase premium items and access exclusive features.

      -

      Roblox is popular because it offers unlimited creativity and fun for users of all ages. You can express yourself, learn new skills, socialize with others, and explore an infinite variety of worlds. Roblox also has a vibrant community of developers, educators, influencers, and fans who support each other and collaborate on projects. Roblox has been praised for its educational benefits, such as fostering creativity, problem-solving, collaboration, coding, and entrepreneurship.

      -

      How to download and install Roblox apk for Android devices

      -

      If you want to play Roblox on your Android device, you need to download and install the Roblox apk file. This is a file that contains all the data and instructions for running the app on your device. Here are the steps to follow:

      -
        -
      1. Go to the Roblox website using any browser on your device.
      2. -
      3. Tap on the Download button at the top right corner of the screen.
      4. -
      5. You will be redirected to the Google Play Store, where you can tap on Install to download and install the app.
      6. -
      7. If you cannot access the Google Play Store or prefer to download the apk file directly from a third-party source, you can go to APKCombo, a website that provides safe and verified apk files for various apps.
      8. -
      9. On APKCombo, search for Roblox in the search bar or use this link.
      10. -
      11. Select the latest version of the app and tap on Download APK.
      12. -
      13. You may need to enable Unknown Sources in your device settings to allow installation from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
      14. -
      15. Once the download is complete, open the apk file from your notification bar or file manager and tap on Install.
      16. -
      17. Wait for the installation to finish, then launch Roblox from your app drawer or home screen. Once you are in a game, you can tap on the Menu button at the top right corner of the screen to access more options, such as reporting, muting, leaving, etc.
      18. -
      -
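      Since APK files are ZIP archives, a quick sanity check before installing a file from a third-party source is to confirm that it opens as a ZIP and contains `AndroidManifest.xml`. The sketch below demonstrates this on a synthetic archive; `demo_roblox.apk` is a placeholder file created for the example, not a real Roblox build:

```python
import zipfile

def looks_like_apk(path: str) -> bool:
    """APK files are ZIP archives; a minimal sanity check is that the
    archive opens and contains AndroidManifest.xml."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return "AndroidManifest.xml" in zf.namelist()

# Demo with a synthetic archive; point this at your downloaded APK instead.
with zipfile.ZipFile("demo_roblox.apk", "w") as zf:
    zf.writestr("AndroidManifest.xml", "<manifest/>")
print(looks_like_apk("demo_roblox.apk"))  # True for the synthetic archive
```

      A corrupted or truncated download will typically fail this check; note that passing it does not prove the APK is safe, only that it is structurally a valid Android package.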

      How to customize your avatar and chat with friends

      -

      To customize your avatar and chat with friends on Roblox, you can use the Avatar and Chat tabs at the bottom of the screen. Here are the steps to follow:

      -
        -
      1. Tap on the Avatar tab to see your current avatar and its details. You can also see your inventory, outfits, badges, and more.
      2. -
      3. To change your avatar's appearance, tap on any of the categories, such as Hair, Face, Clothing, etc. You can choose from a variety of items that you own or buy new ones with Robux.
      4. -
      5. To change your avatar's animations, tap on the Animations category and select an animation pack that you own or buy a new one with Robux.
      6. -
      7. To save your changes, tap on the Save button at the top right corner of the screen.
      8. -
      9. Tap on the Chat tab to see your recent conversations and messages. You can also see your friends list, requests, groups, and more.
      10. -
      11. To chat with a friend or a group, tap on their name or icon and type your message in the text box at the bottom of the screen. You can also send emojis, stickers, images, and voice messages by tapping on the corresponding icons.
      12. -
      13. To add a new friend or a group, tap on the + button at the top right corner of the screen and search for their username or scan their QR code. You can also accept or decline friend requests by tapping on the Requests tab.
      14. -
      15. To manage your chat settings, such as notifications, privacy, filters, and more, tap on the Gear icon at the top left corner of the screen.
      16. -
      -

      How to stay safe and secure on Roblox

      -

      Roblox is a fun and creative platform, but it also has some risks and challenges that you need to be aware of. Here are some tips and resources to help you stay safe and secure on Roblox:

      -

      Roblox's privacy and cookie policy

      -

      Roblox's privacy and cookie policy explains how Roblox collects, uses, and protects your personal information and data. It also explains how Roblox uses cookies and other technologies to enhance your experience and provide you with relevant ads and content. You can read the full policy here. Some of the key points are:

      -

      • You need to be at least 13 years old to use Roblox. If you are under 13, you need to have parental consent and supervision.
      • You need to provide some basic information when you create an account, such as your username, password, birthday, gender, email address, and phone number. You can also choose to link your account with other services, such as Facebook, Google, or Apple.
      • You can control your privacy settings by going to Settings > Privacy. You can choose who can see your profile, message you, invite you to games, follow you, trade with you, and more. You can also enable or disable two-step verification for extra security.
      • You can delete your account by contacting Roblox support. However, this will not delete all your data from Roblox's servers; some data may be retained for legal or operational purposes.
      • You can opt out of receiving marketing emails from Roblox by clicking on the unsubscribe link at the bottom of any email. You can also opt out of personalized ads.
      • You can manage your cookies and choose which types of cookies you want to accept or reject. However, some cookies are essential for Roblox to function properly and cannot be disabled.

      Roblox's community standards

      -

      Roblox's community standards are a set of rules and guidelines that you need to follow when using Roblox. They are designed to ensure that Roblox is a safe and respectful environment for everyone. You can read the full standards here. Some of the key points are:

      • You need to be respectful and kind to other users. You should not harass, bully, threaten, discriminate against, or impersonate anyone.
      • You need to be responsible and honest. You should not cheat, exploit, scam, or hack anyone or anything.
      • You need to be appropriate and safe. You should not share or create any content that is violent, sexual, hateful, illegal, or harmful.
      • You need to follow the rules and terms of service of Roblox and any games or communities you join. You should not violate any laws or regulations.
      • You need to report any violations or issues that you encounter or witness. You can use the report abuse feature in the app or contact Roblox support.

      Roblox's parental controls and reporting features

      -

      Roblox's parental controls and reporting features are tools that you can use to protect yourself and your children from any potential risks or problems on Roblox. They are designed to give you more control and oversight over your or your child's account and activity. Here are some of the features and how to use them:

      • You can enable Account Restrictions for your child's account by going to Settings > Security > Account Restrictions. This will limit your child's access to games and content that are suitable for all ages. It will also prevent your child from chatting with other users or joining private servers.
      • You can enable PIN Protection for your child's account by going to Settings > Security > Account PIN. This will require you to enter a four-digit PIN every time you want to change your child's settings or information.
      • You can enable Email Verification for your child's account by going to Settings > Account Info > Email. This will allow you to receive notifications and alerts about your child's account and activity.
      • You can monitor your child's account and activity by going to Settings > Privacy > My Transactions. This will show you your child's purchase history, game history, friends list, messages, and more.
      • You can report any inappropriate or abusive content or behavior that you or your child encounter or witness by using the report abuse feature in the app. You can also contact Roblox support if you have any questions or concerns.

      Conclusion

      -

      In this article, we have shown you how to download and install Roblox apk for Android devices, how to play Roblox on your smartphone or tablet, and how to stay safe and secure on Roblox. Roblox is a fun and creative platform that allows you to create, share, and play millions of games with friends and millions of other people. It also offers educational benefits, such as fostering creativity, problem-solving, collaboration, coding, and entrepreneurship.

      -

      If you are looking for some alternatives or competitors to Roblox, you can try some of these apps:

      • Minecraft: A sandbox game where you can build, explore, and survive in a blocky world.
      • Fortnite: A battle royale game where you can fight, build, and loot in a colorful world.
      • Among Us: A social deduction game where you have to find the impostor among your crewmates.
      • Gacha Life: A dress-up game where you can create and customize your own anime characters.
      • Avakin Life: A virtual world game where you can chat, socialize, and role-play with other users.

      We hope you enjoyed this article and learned something new. If you want to try Roblox for yourself, you can download it from the Google Play Store or APKCombo using the links provided in this article. You can also visit the Roblox website for more information and resources. Have fun and happy gaming!

      -

      FAQs

      -

      Here are some frequently asked questions about Roblox:

      -

      What are the system requirements for Roblox on Android devices?

      -

      The system requirements for Roblox on Android devices are:

      • An Android device running Android 4.4 (KitKat) or higher.
      • A minimum of 1 GB of RAM.
      • A stable internet connection (Wi-Fi or cellular).
      • A Google Play account to download and install the app.

      How much does Roblox cost to play on Android devices?

      -

      Roblox is free to play on Android devices, but you can also buy Robux, the virtual currency, to purchase premium items and access exclusive features. You can buy Robux with real money using your Google Play account or a credit card. The prices of Robux vary depending on the amount and the region. You can also earn Robux by creating and selling your own games and items, or by joining the Roblox Premium subscription service.

      -

      How can I make money on Roblox?

      -

      You can make money on Roblox by creating and selling your own games and items, or by joining the Roblox Premium subscription service. Here are some ways to do that:

      • You can use the Roblox Studio tool to create your own games using the Lua programming language. You can also use the Developer Marketplace to buy and sell assets, such as models, scripts, sounds, etc. You can monetize your games by selling game passes, developer products, or access fees. You can also enable in-game ads to earn revenue from impressions and clicks.
      • You can use the Avatar Shop to create and sell your own items, such as clothing, accessories, gear, etc. You can also use the UGC Catalog to upload and sell your own custom meshes and textures. You can set your own prices for your items and earn a percentage of the sales.
      • You can join the Roblox Premium subscription service to get a monthly stipend of Robux and a 10% bonus when buying Robux. You can also get access to exclusive items and features, such as trading and selling items, creating groups, joining premium payouts, etc.

      How can I learn to code and create games on Roblox?

      -

      You can learn to code and create games on Roblox by using the Roblox Studio tool and the Lua programming language. Here are some resources to help you get started:

      • You can use the Roblox Developer Hub to access tutorials, guides, articles, videos, and more on how to use Roblox Studio and Lua.
      • You can use the Roblox Education website to access free courses, lessons, projects, and activities on how to code and create games on Roblox.
      • You can use the Roblox Wiki to access documentation, reference materials, sample code, and community forums on how to use Roblox Studio and Lua.
      • You can use the Roblox YouTube Channel to watch videos on how to use Roblox Studio and Lua.

      How can I contact Roblox support if I have any issues?

      -

      You can contact Roblox support if you have any issues or questions about your account, billing, security, games, items, or anything else related to Roblox. Here are some ways to do that:

      • You can use the Roblox Help Center to access FAQs, articles, tips, and troubleshooting guides on various topics related to Roblox.
      • You can use the Roblox Contact Form to submit a request or report a problem to Roblox support. You will need to provide your username, email address, device type, issue category, a description of the issue, and screenshots or videos (if applicable).
      • You can use the Roblox Twitter account to send a direct message or tweet to Roblox support. You will need to provide your username and a brief description of the issue.

      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download and Install JioTV APK 6.0.2 on Android TV - Step by Step Guide.md b/spaces/fatiXbelha/sd/Download and Install JioTV APK 6.0.2 on Android TV - Step by Step Guide.md deleted file mode 100644 index 6672ed746a7ecb6b6cec81ee25370881cc83dedb..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download and Install JioTV APK 6.0.2 on Android TV - Step by Step Guide.md +++ /dev/null @@ -1,115 +0,0 @@ -
      -

Jio TV 6.0.2 APK Download: How to Watch Live TV on Your Android Device

      -

      Do you want to watch live TV on your Android device without any hassle? If yes, then you should try Jio TV, one of the most popular and reliable apps for streaming live TV channels. In this article, we will tell you everything you need to know about Jio TV, including its features, how to download and install it, how to watch live TV on it, and its pros and cons. So, let's get started!

      -

      What is Jio TV?

      -

      Jio TV is an app that allows you to watch live TV channels on your Android device. It is developed by Reliance Jio, one of the leading telecom operators in India. Jio TV offers more than 600 channels in various languages and genres, such as news, sports, entertainment, movies, music, kids, devotional, etc. You can also watch exclusive content from Jio Cinema, Jio Sports, and Jio Originals on Jio TV.

      -

      -

      Features of Jio TV

      -

      Jio TV has many features that make it a great app for watching live TV on your Android device. Some of these features are:

      • You can watch live TV anytime, anywhere, as long as you have a Jio SIM card and an internet connection.
      • You can pause and rewind live TV up to 30 minutes.
      • You can record and download your favorite shows and watch them offline later.
      • You can set reminders for upcoming shows and get notifications when they start.
      • You can mark your favorite channels and genres for easy access.
      • You can browse and search for channels by name, number, language, or category.
      • You can switch between SD and HD quality according to your preference and network speed.
      • You can use the mini player mode to watch live TV while using other apps on your device.
      • You can use the picture-in-picture mode to watch live TV in a small window on your device.
      • You can use the lock screen mode to prevent accidental touches while watching live TV.

      How to download and install Jio TV 6.0 2 APK

      -

      If you want to download and install Jio TV 6.0 2 APK on your Android device, you need to follow these steps:

      1. Go to the official website of Jio TV and open the JioTV APK for Android download page.
      2. Download the latest version of the Jio TV 6.0.2 APK file on your device.
      3. Go to the settings of your device and enable the option "Unknown sources" under security or privacy settings.
      4. Locate the downloaded APK file on your device and tap on it to install it.
      5. Wait for the installation process to complete and then launch the app.
      6. Log in with your Jio number and OTP (one-time password), or scan the QR code with another device that has a Jio SIM card.
      7. Enjoy watching live TV on your Android device with Jio TV!
      -
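Sideloading an APK skips the Play Store's built-in verification, so it is a good habit to check the file's integrity before installing. The shell sketch below is a generic illustration for a Linux machine — the filename is a placeholder and the "download" is simulated with a stand-in file, so none of the values are real JioTV data — showing how a download would be compared against a published SHA-256 checksum:

```shell
# Hypothetical filename -- substitute the APK you actually downloaded.
APK="jiotv-6.0.2.apk"
printf 'stand-in for the real APK bytes' > "$APK"   # simulate the download

# EXPECTED is normally copied from the download page; here it is derived
# from the stand-in file so this sketch is self-contained and runnable.
EXPECTED=$(sha256sum "$APK" | cut -d' ' -f1)

ACTUAL=$(sha256sum "$APK" | cut -d' ' -f1)
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "checksum OK: safe to sideload"
else
    echo "checksum MISMATCH: do not install" >&2
fi
```

In practice you would copy EXPECTED from the site that published the APK; if no checksum is published, this check can only prove the file was not corrupted in transit, not that it is authentic.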

How to watch live TV on Jio TV

      -

Once you have downloaded and installed the Jio TV 6.0.2 APK on your Android device, you can start watching live TV on it. Here are some tips on how to use the app and enjoy its features:

      -

      How to browse and search for channels

      -

      Jio TV has a simple and user-friendly interface that allows you to browse and search for channels easily. You can swipe left or right on the home screen to see the different categories of channels, such as entertainment, sports, news, movies, etc. You can also tap on the menu icon on the top left corner to see the list of languages and genres available. You can select your preferred language and genre to filter the channels accordingly.

      -

      If you want to search for a specific channel by name or number, you can use the search icon on the top right corner. You can type in the name or number of the channel and see the results. You can also use voice search by tapping on the microphone icon and speaking the name or number of the channel.

      -

      How to pause and rewind live TV

      -

      One of the best features of Jio TV is that it allows you to pause and rewind live TV up to 30 minutes. This means that you can catch up with any missed moments or replay any interesting scenes without missing anything. To pause live TV, you just need to tap on the screen and then tap on the pause icon. To resume watching, you can tap on the play icon. To rewind live TV, you can use the slider bar at the bottom of the screen and drag it to the left. You can also use the rewind icon to go back by 10 seconds.

      -


      -

      How to record and download shows

      -

      If you want to watch your favorite shows offline later, you can use the record and download feature of Jio TV. This feature allows you to record and download any show that is currently airing or scheduled to air in the future. To record a show, you need to tap on the screen and then tap on the record icon. You can choose to record the current episode or all episodes of the show. You can also set a timer for recording if you want to record only a part of the show.

      -

      To download a show, you need to go to the menu icon and then tap on "My Recordings". You will see a list of all your recorded shows. You can tap on any show and then tap on the download icon. You can choose to download in SD or HD quality depending on your network speed and storage space. You can also delete any recorded or downloaded show by tapping on the trash icon.

      -

      How to set reminders and favorites

      -

      If you don't want to miss any upcoming show that you are interested in, you can use the reminder feature of Jio TV. This feature allows you to set reminders for any show that is scheduled to air in the future. To set a reminder, you need to tap on the screen and then tap on the reminder icon. You can choose to set a reminder for 5 minutes, 15 minutes, or 30 minutes before the show starts. You will get a notification when the show is about to start.

      -

      If you have some favorite channels or genres that you watch frequently, you can use the favorite feature of Jio TV. This feature allows you to mark your favorite channels and genres for easy access. To mark a channel as favorite, you need to long press on it and then tap on the star icon. To mark a genre as favorite, you need to go to the menu icon and then tap on "My Favorites". You will see a list of all your favorite channels and genres.

      Pros and cons of Jio TV

      -

      Jio TV is a great app for watching live TV on your Android device, but it also has some drawbacks. Here are some of the pros and cons of Jio TV that you should consider before using it:

      -

      Pros

      -

      Wide range of channels and languages

      -

      Jio TV offers more than 600 channels in various languages and genres, such as news, sports, entertainment, movies, music, kids, devotional, etc. You can find channels for every taste and preference on Jio TV. You can also watch exclusive content from Jio Cinema, Jio Sports, and Jio Originals on Jio TV.

      -

      User-friendly interface and controls

      -

      Jio TV has a simple and user-friendly interface that allows you to browse and search for channels easily. You can also use various features and modes to enhance your viewing experience, such as pause and rewind live TV, record and download shows, set reminders and favorites, switch between SD and HD quality, use the mini player mode, use the picture-in-picture mode, and use the lock screen mode.

      -

      Free and exclusive content for Jio users

      -

      Jio TV is free for all Jio users who have a valid Jio SIM card and an internet connection. You don't need to pay any subscription fee or charges to watch live TV on Jio TV. You can also enjoy exclusive content from Jio Cinema, Jio Sports, and Jio Originals on Jio TV that is not available on other platforms.

      -

      Cons

      -

      Requires a Jio SIM card and internet connection

      -

      Jio TV only works if you have a Jio SIM card and an internet connection. You cannot use Jio TV with any other SIM card or network provider. You also need to have a good internet speed and data plan to watch live TV on Jio TV without any buffering or interruption.

      -

      May consume a lot of data and battery

      -

      Watching live TV on Jio TV may consume a lot of data and battery on your Android device. Depending on the quality and duration of your streaming, you may end up using a lot of your data plan and draining your battery quickly. You should always keep an eye on your data usage and battery level while using Jio TV.

      -

      May not work on some devices or regions

      -

      Jio TV may not work on some devices or regions due to compatibility or licensing issues. Some devices may not support the app or its features properly. Some regions may not have access to some channels or content due to geo-restrictions or regulations. You should always check the compatibility and availability of Jio TV before downloading and installing it.

      -

      Conclusion

      -

      Jio TV is an app that allows you to watch live TV channels on your Android device. It has many features that make it a great app for watching live TV, such as pause and rewind live TV, record and download shows, set reminders and favorites, switch between SD and HD quality, use the mini player mode, use the picture-in-picture mode, and use the lock screen mode. It also offers more than 600 channels in various languages and genres, as well as exclusive content from Jio Cinema, Jio Sports, and Jio Originals.

      -

      However, Jio TV also has some drawbacks that you should consider before using it, such as requiring a Jio SIM card and internet connection, consuming a lot of data and battery, and not working on some devices or regions. You should always check the compatibility and availability of Jio TV before downloading and installing it.

      -

We hope this article has helped you understand what the Jio TV 6.0.2 APK download is and how to watch live TV on your Android device with it. If you have any questions or feedback, please feel free to leave a comment below.

      -

      FAQs

      • Is Jio TV free?
        Jio TV is free for all Jio users who have a valid Jio SIM card and an internet connection. You don't need to pay any subscription fee or charges to watch live TV on Jio TV.
      • How can I watch live TV on my PC with Jio TV?
        You can use an Android emulator such as BlueStacks or Nox Player. Download and install the emulator on your PC, then download and install the Jio TV 6.0.2 APK file on it. Launch the app and log in with your Jio number and OTP, or scan the QR code with another device that has a Jio SIM card.
      • How can I watch Jio TV on my smart TV?
        You can use a Chromecast or a similar device that can cast your Android screen to your TV. Connect the device to your TV, enable the cast option on your Android device, then open the Jio TV app and select the channel you want to watch.
      • How can I update Jio TV to the latest version?
        Go to the Google Play Store or the official Jio TV website, download the latest version of the APK file, and install it on your device to get the new features and improvements.
      • How can I contact Jio TV customer care?
        Call 1800-889-9999 or 198 from your Jio number, email care@jio.com, or chat with them on the MyJio app.

      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy Dolphin Emulator with Xbox Controllers Download and Install the Latest Drivers and Configurations.md b/spaces/fatiXbelha/sd/Enjoy Dolphin Emulator with Xbox Controllers Download and Install the Latest Drivers and Configurations.md deleted file mode 100644 index e3c9d43af0c4f67bf597d4eec84cd39947071ec1..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy Dolphin Emulator with Xbox Controllers Download and Install the Latest Drivers and Configurations.md +++ /dev/null @@ -1,195 +0,0 @@ - -

      How to Download and Configure Dolphin Emulator Controller

      -

      If you are a fan of Nintendo GameCube and Wii games, you might have heard of Dolphin emulator. Dolphin is a free and open-source software that allows you to play these games on your PC with enhanced graphics, performance, and features. One of the best things about Dolphin is that it supports various types of controllers, including real and emulated ones.

      -

      In this article, we will show you how to download and configure Dolphin emulator controller for your PC. Whether you want to use your original GameCube or Wii controllers, or any other PC-compatible controllers, we will guide you through the steps and options you need to know.

      -

      -

      Requirements for Dolphin Emulator Controller

      -

      Before you start downloading and configuring Dolphin emulator controller, you need to make sure you have the following hardware and software requirements:

      • A PC that meets the minimum or recommended system requirements for Dolphin emulator.
      • A controller that you want to use with Dolphin emulator. It can be a real GameCube or Wii controller, or any other PC-compatible controller, such as an Xbox, PlayStation, or Switch controller.
      • A controller adapter that allows you to connect your controller to your PC. If you are using a real GameCube controller, you need an official GameCube controller adapter for Wii U or one of its clones. If you are using a real Wii remote, you need a Bluetooth dongle that supports Toshiba or Broadcom drivers.
      • The latest version of Dolphin emulator for your operating system (Windows, Mac, or Linux), downloaded from the official website.

      How to Download Dolphin Emulator Controller

      -

      Downloading Dolphin emulator controller is easy and straightforward. All you need to do is follow these steps:

      -

      -
        -
      1. Go to the official Dolphin website.
      2. Choose the version of Dolphin emulator that matches your operating system (Windows, Mac, or Linux) and click on the download button.
      3. Wait for the download to finish and then extract the zip file to a folder of your choice.
      4. Open the Dolphin folder and double-click on the Dolphin.exe file to launch the emulator.

Congratulations, you have successfully downloaded Dolphin emulator. Now, let's see how to configure its controller support for your preferred controller type.

      -

      How to Configure Dolphin Emulator Controller

      -

      Configuring Dolphin emulator controller is not difficult, but it requires some attention and customization. Depending on whether you want to use a real or emulated controller for GameCube or Wii, you have different options and settings to choose from.

      -

      To access the controller settings, you need to click on the Controllers button from the Dolphin main window. This will open a new window where you can see four tabs: GameCube, Wii Remote 1, Wii Remote 2-4, and Advanced.

      -

      The GameCube tab allows you to configure the controllers for GameCube games. The Wii Remote tabs allow you to configure the controllers for Wii games. The Advanced tab allows you to adjust some global settings for all controllers.

      -

      In this article, we will focus on the GameCube and Wii Remote tabs, as they are the most relevant for Dolphin emulator controller. Let's start with the GameCube tab.

      -

      How to Configure Real GameCube Controller

      -

      If you have a real GameCube controller and an official GameCube controller adapter for Wii U or its clones, you can use them with Dolphin emulator. This will give you the most authentic and accurate experience of playing GameCube games on PC.

      -

      To configure a real GameCube controller, you need to follow these steps:

      1. Connect your GameCube controller adapter to your PC via USB. Make sure it is switched to Wii U mode if it has a switch.
      2. Open Dolphin emulator and click on the Controllers button. Go to the GameCube tab and select "GameCube Adapter for Wii U" from the Port 1 dropdown menu. This will enable the native support for the adapter and detect your controller automatically.
      3. If you want to use more than one real GameCube controller, repeat the same process for Port 2, 3, and 4.
      4. Click on OK to save your settings and close the window.

You have successfully configured your real GameCube controller with Dolphin emulator. However, depending on your operating system and driver version, you might need some additional steps to make it work properly. For a detailed setup guide for the GameCube controller adapter, refer to the official Dolphin wiki.
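One platform-specific wrinkle for Linux users: the adapter is usually invisible to a non-root Dolphin process until a udev rule grants access to the USB device. The sketch below writes such a rule to a demo file (the real destination, /etc/udev/rules.d/, requires root); the vendor/product IDs 057e:0337 are the ones commonly documented for the official adapter, but treat them as an assumption and verify your own hardware with `lsusb`:

```shell
# Assumed IDs for the official GameCube adapter (check yours with `lsusb`).
RULE='SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ATTRS{idVendor}=="057e", ATTRS{idProduct}=="0337", MODE="0666"'

# The real file would be /etc/udev/rules.d/51-gcadapter.rules, written as
# root and followed by `sudo udevadm control --reload-rules`. This sketch
# writes to a demo file instead so it can run without privileges.
echo "$RULE" > demo-51-gcadapter.rules
cat demo-51-gcadapter.rules
```

After installing the real rule and replugging the adapter, Dolphin's "GameCube Adapter for Wii U" option should detect it without running the emulator as root.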

      How to Configure Emulated GameCube Controller


      If you don't have a real GameCube controller or an adapter, you can still use any PC-compatible controller as an emulated GameCube controller with Dolphin emulator. This will allow you to map the buttons and sticks of your controller to the GameCube controller layout.


      To configure an emulated GameCube controller, you need to follow these steps:

      1. Connect your PC-compatible controller to your PC via USB or Bluetooth.
      2. Open Dolphin emulator and click on the Controllers button. Go to the GameCube tab and select "Standard Controller" from the Port 1 dropdown menu. This will enable the emulation mode for the controller.
      3. Click on the Configure button next to the Port 1 dropdown menu. This will open a new window showing the GameCube controller layout alongside the buttons and sticks of your PC-compatible controller.
      4. Click on each button or stick on the GameCube controller layout, then press the corresponding button or stick on your PC-compatible controller to assign the input. You can also use your mouse to drag and drop the inputs.
      5. If you want to adjust the sensitivity, dead zone, or radius of the sticks, use the sliders below the layout. You can also invert the axes of the sticks if needed.
      6. If you want rumble feedback, check the Rumble box and choose a motor from your PC-compatible controller.
      7. If you want to use more than one emulated GameCube controller, repeat the same process for Ports 2, 3, and 4.
      8. Click on OK to save your settings and close the window.

      You have successfully configured your emulated GameCube controller with Dolphin emulator. Here is an example of how it might look with an Xbox One controller:

      [Image: emulated GameCube controller configuration example]
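      To make the dead-zone and radius sliders mentioned above more concrete, here is a minimal sketch (parameter names invented for this example) of the transformation such settings typically apply to a raw stick reading:

```python
def apply_dead_zone(raw, dead_zone=0.1, radius=1.0):
    """Map a raw axis value in -1.0..1.0 through a dead zone and radius.

    Readings inside the dead zone are treated as centred; the remaining
    range is rescaled so the output still spans the full 0..radius.
    """
    magnitude = abs(raw)
    if magnitude <= dead_zone:
        return 0.0
    # Rescale the live range (dead_zone..1.0) onto 0..radius.
    scaled = (magnitude - dead_zone) / (1.0 - dead_zone) * radius
    return scaled if raw > 0 else -scaled
```

      With a 10% dead zone, a slight stick drift of 0.05 is ignored, while a full deflection still reaches 1.0 — raising the dead zone trades off responsiveness for resistance to worn sticks.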

      How to Configure Real Wii Remote


      If you have a real Wii remote and a Bluetooth dongle that supports Toshiba or Broadcom drivers, you can use them with Dolphin emulator. This will give you the most realistic and immersive experience of playing Wii games on PC.


      To configure a real Wii remote, you need to follow these steps:

      1. Connect your Bluetooth dongle to your PC via USB. Make sure it has the correct drivers installed; you can check this guide for more information on how to install them.
      2. Open Dolphin emulator and click on the Controllers button. Go to one of the Wii Remote tabs (1-4) and select "Real Wii Remote" from the dropdown menu. This will enable the real Wii remote mode for that slot.
      3. Press the red sync button on your Wii remote (located under the battery cover), then click on Refresh in Dolphin. This will pair your Wii remote with your PC via Bluetooth.
      4. If you want to use more than one real Wii remote, repeat the same process for another Wii Remote tab (2-4).
      5. Click on OK to save your settings and close the window.

      You have successfully configured your real Wii remote with Dolphin emulator. However, depending on your Bluetooth dongle and Wii remote model, you might encounter some issues, such as lag, disconnection, or low battery. For a detailed guide on how to troubleshoot these issues, please refer to this link.

      How to Configure Emulated Wii Remote


      If you don't have a real Wii remote or a Bluetooth dongle, you can still use any PC-compatible controller as an emulated Wii remote with Dolphin emulator. This will allow you to simulate the motion and pointer controls of the Wii remote with your controller buttons and sticks.


      To configure an emulated Wii remote, you need to follow these steps:

      1. Connect your PC-compatible controller to your PC via USB or Bluetooth.
      2. Open Dolphin emulator and click on the Controllers button. Go to one of the Wii Remote tabs (1-4) and select "Emulated Wii Remote" from the dropdown menu. This will enable the emulation mode for that slot.
      3. Click on the Configure button next to the dropdown menu. This will open a new window showing the Wii remote layout alongside the buttons and sticks of your PC-compatible controller.
      4. Click on each button or stick on the Wii remote layout, then press the corresponding button or stick on your PC-compatible controller to assign the input. You can also use your mouse to drag and drop the inputs.
      5. If you want to adjust the sensitivity, dead zone, or radius of the sticks, use the sliders below the layout. You can also invert the axes of the sticks if needed.
      6. If you want rumble feedback, check the Rumble box and choose a motor from your PC-compatible controller.
      7. If you want motion controls, check the Enable box under Motion Simulation and choose a source from your PC-compatible controller. You can also adjust the sensitivity and center of the motion simulation.
      8. If you want pointer controls, check the Enable box under IR and choose a source from your PC-compatible controller. You can also adjust the sensitivity, center, width, height, and tilt of the pointer simulation.
      9. If you want to use more than one emulated Wii remote, repeat the same process for another Wii Remote tab (2-4).
      10. Click on OK to save your settings and close the window.

      You have successfully configured your emulated Wii remote with Dolphin emulator. Here is an example of how it might look with an Xbox One controller:

      [Image: emulated Wii remote configuration example]
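      The IR center, width, and height options mentioned above can be understood as a mapping from a normalised pointer input to an on-screen position. A minimal sketch of that idea — the parameter names and defaults here are illustrative, not Dolphin's exact ones:

```python
def pointer_to_screen(nx, ny, screen_w, screen_h,
                      center_x=0.5, center_y=0.5, width=0.9, height=0.8):
    """Map normalised pointer input (nx, ny in -1..1) to pixel coordinates."""
    x = (center_x + nx * width / 2) * screen_w
    y = (center_y + ny * height / 2) * screen_h
    # Clamp so the simulated pointer never leaves the screen.
    return (min(max(x, 0), screen_w - 1), min(max(y, 0), screen_h - 1))
```

      Shrinking the width and height makes small physical movements sweep the whole screen (higher effective sensitivity); moving the center shifts where a neutral input points.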

      How to Test Dolphin Emulator Controller


      Now that you have configured your controller in Dolphin emulator, you are ready to test it with your favorite GameCube or Wii games. To do so, you need to follow these steps:

      1. Launch Dolphin emulator and click on Open from the main window. Browse to the folder where you have stored your GameCube or Wii game ISO files and select one.
      2. Wait for the game to load and start playing. You should be able to control the game with your chosen controller type.
      3. If you encounter any issues with your controller, such as lag, input errors, or compatibility problems, try some of these tips:

         • Make sure your controller is connected properly and has enough battery power.
         • Make sure your controller adapter or Bluetooth dongle has the correct drivers installed and is working properly.
         • Make sure your Dolphin emulator is updated to the latest version and has the correct settings for your controller type.
         • Check if your game is compatible with Dolphin emulator and your controller type. You can consult the compatibility list for more information.
         • Try adjusting the sensitivity, dead zone, or radius of your sticks or motion simulation if they are too sensitive or not responsive enough.
         • Try changing the input backend or API of your controller in Dolphin settings if it is causing lag or errors.

      You have successfully tested your Dolphin emulator controller with a game. Enjoy playing your favorite GameCube and Wii games on PC with enhanced graphics, performance, and features.


      Conclusion


      In this article, we have shown you how to download and configure Dolphin emulator controller for your PC. We have explained how to use real or emulated controllers for GameCube and Wii games with Dolphin emulator. We have also provided some tips on how to test and troubleshoot your controller issues.


      We hope this article has been helpful and informative for you. Whether you prefer using your original GameCube or Wii controllers, or any other PC-compatible controllers, we hope you have a great time playing GameCube and Wii games on PC with Dolphin emulator.


      If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!


      Frequently Asked Questions


      Here are some of the most common questions and answers related to Dolphin emulator controller:


      What is the best controller for Dolphin emulator?


      There is no definitive answer to this question, as different controllers have different advantages and disadvantages for different games and preferences. However, some general guidelines are:

      • If you want the most authentic and accurate experience, use a real GameCube or Wii controller with an adapter or Bluetooth dongle.
      • If you want the most versatile and customizable experience, use any PC-compatible controller as an emulated GameCube or Wii controller.
      • If you want the least input lag and interference, use a wired controller rather than a wireless one.
      • If you want the best compatibility and support, use a controller that is widely used and recognized by Dolphin emulator, such as an Xbox, PlayStation, or Switch controller.

      How do I use a keyboard and mouse as a controller for Dolphin emulator?


      You can also use a keyboard and mouse as an emulated GameCube or Wii controller for Dolphin emulator. To do so, you need to follow these steps:

      1. Open Dolphin emulator and click on the Controllers button. Go to the GameCube or Wii Remote tab and select "Standard Controller" or "Emulated Wii Remote" from the dropdown menu.
      2. Click on the Configure button next to the dropdown menu. This will open a new window showing the GameCube or Wii remote layout alongside the keys and buttons of your keyboard and mouse.
      3. Click on each button or stick on the layout, then press the corresponding key or mouse button to assign the input. You can also use your mouse to drag and drop the inputs.
      4. If you want to adjust the sensitivity, dead zone, or radius of the sticks or motion simulation, use the sliders below the layout. You can also invert the axes of the sticks if needed.
      5. A keyboard and mouse have no rumble motors, so you can leave the Rumble option unchecked.
      6. If you want motion controls, check the Enable box under Motion Simulation and choose a source, such as your mouse. You can also adjust the sensitivity and center of the motion simulation.
      7. If you want pointer controls, check the Enable box under IR and choose your mouse as the source. You can also adjust the sensitivity, center, width, height, and tilt of the pointer simulation.
      8. Click on OK to save your settings and close the window.

      You have successfully configured your keyboard and mouse as an emulated GameCube or Wii controller with Dolphin emulator. However, keep in mind that using a keyboard and mouse might not be very comfortable or intuitive for some games that require precise or complex inputs.


      How do I update Dolphin emulator controller?


      To update Dolphin emulator controller, you need to update Dolphin emulator itself. Dolphin emulator is constantly updated with new features, bug fixes, and compatibility improvements. To update Dolphin emulator, you need to follow these steps:

      1. Go to the official Dolphin website and download the latest version of Dolphin emulator for your operating system.
      2. Extract the zip file to a folder of your choice. You can overwrite your existing Dolphin folder or create a new one.
      3. Open the new Dolphin folder and double-click on the Dolphin.exe file to launch the updated emulator.

      You have successfully updated Dolphin emulator controller. Your previous settings and games should be preserved, but you might need to adjust some options if they have changed in the new version.
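      The extract step above can be sketched in Python; the archive path is a placeholder for whichever release zip you downloaded:

```python
import zipfile
from pathlib import Path

def extract_release(archive_path, dest_dir):
    """Unpack a downloaded release zip into dest_dir and list its contents."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)  # create the target folder if needed
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest)
    return sorted(p.name for p in dest.iterdir())
```

      Extracting into a fresh folder (rather than overwriting) lets you keep the old version around until you have confirmed the new one works.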


      How do I uninstall Dolphin emulator controller?


      To uninstall Dolphin emulator controller, you need to uninstall Dolphin emulator itself. Dolphin emulator does not have an uninstaller program, so you need to manually delete its files and folders from your PC. To uninstall Dolphin emulator, you need to follow these steps:

      1. Close Dolphin emulator if it is running.
      2. Go to the folder where you installed Dolphin emulator and delete it. You can also delete any shortcuts or icons related to it.
      3. If you want to delete your games, saves, screenshots, or other data related to Dolphin emulator, go to Documents\Dolphin Emulator (or wherever you have stored them) and delete them as well.

      You have successfully uninstalled Dolphin emulator controller. Your PC should be free of any traces of Dolphin emulator.
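      Since there is no uninstaller, the manual removal above is just deleting folders. A cautious sketch of that step — the folder names are examples only, so double-check the paths before running anything like this:

```python
import shutil
from pathlib import Path

def remove_dirs(paths):
    """Delete each existing folder in paths; return the ones actually removed."""
    removed = []
    for p in map(Path, paths):
        if p.is_dir():
            shutil.rmtree(p)  # removes the folder and everything inside it
            removed.append(str(p))
    return removed
```

      Returning the list of removed folders gives you a record of what was actually deleted, which is useful when some of the paths (for example a never-created user-data folder) do not exist.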


      How do I contact Dolphin emulator support?


      If you have any questions, feedback, or issues related to Dolphin emulator controller or Dolphin emulator in general, you can contact Dolphin emulator support through various channels. Here are some of the ways you can reach them:

      • You can visit the official Dolphin website and check the FAQ, guides, forums, blog, or wiki sections for more information and help.
      • You can join the official Dolphin Discord server and chat with other users and developers of Dolphin emulator. You can also ask for support, report bugs, or suggest features there.
      • You can follow the official Dolphin Twitter account for the latest news and updates on Dolphin emulator. You can also tweet at them or send them a direct message if you have any questions or feedback.
      • You can subscribe to the official Dolphin YouTube channel and watch videos of Dolphin emulator in action. You can also comment on the videos or message them with questions or feedback.
      • You can support the development of Dolphin emulator by donating via PayPal or Patreon; the links are on their website.

      You have successfully learned how to contact Dolphin emulator support. They are always happy to hear from you and help you with any issues or suggestions you might have.


      Tap Dig My Museum APK Mod: A Fun and Educational Game for Dinosaur Lovers


      Introduction


      Do you love dinosaurs? Do you dream of becoming a paleontologist and discovering new species of ancient creatures? If so, you will love Tap Dig My Museum APK Mod, a casual simulation game that lets you create your own museum by unearthing fossils and putting them back together. In this article, we will tell you what Tap Dig My Museum APK Mod is, why you should play it, what features it offers, and how to download and install it on your device. Let's get started!


      What is Tap Dig My Museum APK Mod?


      Tap Dig My Museum APK Mod is a modified version of the original game Tap! Dig! My Museum!, developed by oridio. The game is available for Android devices and can be downloaded for free from various sources online. The modded version gives you unlimited money and gems, which you can use to upgrade your museum and unlock new fossils faster. The game has a simple gameplay mechanic: you tap on the screen to dig up fossils, then drag them to the right place to assemble them. You can also customize your museum with different decorations, exhibits, and facilities.






      Why should you play Tap Dig My Museum APK Mod?


      Tap Dig My Museum APK Mod is a fun and educational game that will appeal to anyone who loves dinosaurs and history. Here are some reasons why you should play it:

      • It is relaxing and satisfying. You can enjoy the process of digging up fossils and seeing them come to life in your museum.
      • It is informative and educational. You can learn about different types of dinosaurs, their names, their habitats, and their characteristics.
      • It is creative and customizable. You can design your museum according to your preferences and style.
      • It is challenging and rewarding. You can complete various missions and achievements, and earn coins and gems to unlock new fossils and items.

      Features of Tap Dig My Museum APK Mod


      Unearth and assemble dinosaur fossils


      The main feature of Tap Dig My Museum APK Mod is the fossil excavation and assembly. You can tap on the screen to dig up fossils from different layers of soil, then drag them to the right place to form a complete skeleton. You can also rotate and zoom in on the fossils to see them better. There are over 100 types of fossils to collect, ranging from common ones like Tyrannosaurus rex and Triceratops, to rare ones like Spinosaurus and Pteranodon. You can also see the names and descriptions of each fossil in your collection.


      Customize and upgrade your museum


      Another feature of Tap Dig My Museum APK Mod is the museum customization and upgrade. You can decorate your museum with various items, such as walls, floors, windows, doors, plants, lights, signs, statues, paintings, and more. You can also upgrade your museum with different facilities, such as a cafe, a gift shop, a restroom, a security room, a research lab, and more. You can also change the theme of your museum, such as natural history, science fiction, fantasy, horror, or mystery.



      Collect coins and gems


      A third feature of Tap Dig My Museum APK Mod is the coin and gem collection. You can earn coins by completing missions, achievements, daily quests, and mini-games. You can also get gems by watching ads, rating the game, or using the modded version. You can use coins and gems to buy new fossils, items, and upgrades for your museum. You can also use them to speed up the digging process or skip the waiting time.


      Enjoy the retro pixel art style


      A fourth feature of Tap Dig My Museum APK Mod is the retro pixel art style. The game has a charming and nostalgic look, with colorful and pixelated graphics. The game also has a catchy and upbeat soundtrack, and cute sound effects. The game is easy to play and suitable for all ages.


      How to download and install Tap Dig My Museum APK Mod?


      If you want to play Tap Dig My Museum APK Mod on your device, you need to follow these simple steps:


      Step 1: Download the APK file from a trusted source


      The first step is to download the APK file of Tap Dig My Museum APK Mod from a trusted source. You can find many websites that offer the download link, but make sure they are safe and reliable. You can also scan the file with an antivirus program before downloading it. The file size is about 50 MB, so it should not take too long to download.


      Step 2: Enable unknown sources on your device


      The second step is to enable unknown sources on your device. This is necessary because the game is not available on the official Google Play Store, and you need to allow your device to install apps from other sources. To do this, go to your device settings, then security, then unknown sources, and turn it on. You may also need to confirm this action with a pop-up message.


      Step 3: Install the APK file and launch the game


      The third step is to install the APK file and launch the game. To do this, locate the downloaded file on your device, tap on it, and follow the instructions on the screen. It should take a few seconds to install the game. Once it is done, you can open the game and enjoy it.
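      Before installing, you can sanity-check the downloaded file. APK files are ZIP archives containing an AndroidManifest.xml, so a quick check like the sketch below can catch corrupt or mislabeled downloads — note it only rules files out; passing it does not prove an APK is safe:

```python
import zipfile

def looks_like_apk(path):
    """Return True if path is at least plausibly an APK (a ZIP with a manifest)."""
    if not zipfile.is_zipfile(path):
        return False  # not a ZIP archive at all, so not an APK
    with zipfile.ZipFile(path) as zf:
        return "AndroidManifest.xml" in zf.namelist()
```

      If this check fails on a freshly downloaded file, re-download it rather than installing it.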


      Conclusion


      Tap Dig My Museum APK Mod is a fun and educational game that will keep you entertained for hours. You can dig up fossils, assemble them, customize your museum, collect coins and gems, and learn about dinosaurs. The game has a simple gameplay mechanic, a retro pixel art style, and a modded version that gives you unlimited money and gems. If you love dinosaurs and history, you should definitely try this game.


      Summary of the main points


      In this article, we have covered:

      • What is Tap Dig My Museum APK Mod?
      • Why should you play Tap Dig My Museum APK Mod?
      • What features does Tap Dig My Museum APK Mod offer?
      • How to download and install Tap Dig My Museum APK Mod?

      Call to action


      If you are interested in playing Tap Dig My Museum APK Mod, you can download it from a trusted source as described above. Have fun digging up fossils and creating your own museum!


      Frequently Asked Questions


      Here are some of the most common questions that people ask about Tap Dig My Museum APK Mod:

      1. Is Tap Dig My Museum APK Mod safe to use?

         Yes, Tap Dig My Museum APK Mod is safe to use as long as you download it from a trusted source. However, you should always be careful when installing apps from unknown sources, as they may contain viruses or malware. You should also scan the file with an antivirus program before installing it.

      2. Is Tap Dig My Museum APK Mod compatible with my device?

         Tap Dig My Museum APK Mod is compatible with most Android devices that run Android 4.4 or higher. However, some devices may experience performance issues or glitches due to different specifications or settings. If you encounter any problems while playing the game, try adjusting the graphics quality or clearing the cache.

      3. How can I update Tap Dig My Museum APK Mod?

         To update Tap Dig My Museum APK Mod, download the latest version of the APK file from a trusted source and install it over the existing one. You do not need to uninstall the previous version or lose your progress. However, you should always back up your data before updating, just in case something goes wrong.

      4. How can I contact the developer of Tap Dig My Museum APK Mod?

         If you have any questions, suggestions, feedback, or issues regarding Tap Dig My Museum APK Mod, you can contact the developer of the original game, oridio, by email at oridioinc@gmail.com. You can also visit their website at https://oridio.net/ or follow them on Twitter at @oridioinc.

      5. What are some similar games to Tap Dig My Museum APK Mod?

         If you enjoy Tap Dig My Museum APK Mod, you may also like other digging and museum-building simulation games.

      I hope you found this article helpful and informative. If you did, please share it with your friends and family who might also be interested in playing Tap Dig My Museum APK Mod. Thank you for reading and have a great day!


      How to Download King of Kings Majesty, a Beautiful Worship Song


      If you are looking for a way to download King of Kings Majesty, a beautiful worship song by Jarrod Cooper, you have come to the right place. In this article, we will tell you what this song is about, why you should listen to worship music, and how to download it from YouTube legally and easily.






      What is King of Kings Majesty?


      King of Kings Majesty is a worship song that was written by British author and songwriter Jarrod Cooper in 1996. It is a song that expresses the awe and love for God, who is the King of kings and the Lord of lords. The song also declares the majesty and glory of God, who came from heaven to earth, died on the cross, rose from the dead, and reigns forever.


      The origin and meaning of the song


      Jarrod Cooper wrote this song after he had a vision of Jesus in his bedroom. He said that he saw Jesus as the King of kings, wearing a crown and a robe, and sitting on a throne. He felt an overwhelming sense of God's presence and power, and he started to sing the chorus of the song spontaneously. He later wrote the verses based on the scriptures that describe the attributes and works of God.


      The popularity and impact of the song


      Since its release, King of Kings Majesty has become one of the most popular and widely sung worship songs in the world. It has been translated into many languages and recorded by various artists, such as Hillsong Worship, Ingrid DuMosch, Graham Kendrick, and others. It has also been used in many churches, conferences, events, and media platforms. Many people have testified that this song has touched their lives and brought them closer to God.


      Why Should You Listen to Worship Music?


      Worship music is not just a genre or a style of music. It is a way of expressing your love and devotion to God with your whole being. It is also a way of connecting with God in unique ways and experiencing His presence and power in your life. Here are some of the benefits of listening to worship music:


      The benefits of worship music for your faith, mind, and heart

      • Worship music teaches you the gospel and compacts your faith. It helps you learn and remember the truths and promises of God through catchy melodies and lyrics. It also reinforces your beliefs and convictions as you sing along with faith.
      • Worship music connects you to God in unique ways. It allows you to communicate with God in a personal and intimate way. It also enables you to hear from God as He speaks to you through His Spirit and His word.
      • Worship music allows you to express your love to God with your whole being. It helps you to praise God with your heart, soul, mind, and strength. It also helps you to surrender your will, desires, fears, and worries to God.

      The biblical commands and examples of singing praises to God


      The Bible is full of commands and examples of singing praises to God. Here are some of them:


      -
        -
      • Psalm 96:1-2 says, "Sing to the Lord a new song; sing to the Lord, all the earth. Sing to the Lord, praise his name; proclaim his salvation day after day."
      • Ephesians 5:19 says, "Speak to one another with psalms, hymns, and songs from the Spirit. Sing and make music from your heart to the Lord."
      • Colossians 3:16 says, "Let the message of Christ dwell among you richly as you teach and admonish one another with all wisdom through psalms, hymns, and songs from the Spirit, singing to God with gratitude in your hearts."
      • Revelation 5:9-10 says, "And they sang a new song, saying: 'You are worthy to take the scroll and to open its seals, because you were slain, and with your blood you purchased for God persons from every tribe and language and people and nation. You have made them to be a kingdom and priests to serve our God, and they will reign on the earth.'"
      -

      How to Download King of Kings Majesty from YouTube

      -

      YouTube is one of the most popular and accessible platforms to listen to worship music online. You can find many versions and covers of King of Kings Majesty on YouTube, such as this one by Jarrod Cooper himself: https://www.youtube.com/watch?v=8iMA3XBH9bc. However, what if you want to download the song and listen to it offline? Here are some ways to do that:

      -

      The legal and ethical issues of downloading music from YouTube

      -

      Before you download any music from YouTube, you should be aware of the legal and ethical issues involved. Downloading music from YouTube without the permission of the copyright owner or the platform is illegal and violates the terms of service of YouTube. It also deprives the artists and creators of their rightful income and recognition. Therefore, you should respect the rights and wishes of the original owners and only download music from YouTube if they allow it or if you have a valid reason.

      -

      The official way to download music from YouTube with YouTube Music Premium

      -

      The official way to download music from YouTube is to subscribe to YouTube Music Premium, a paid service that lets you enjoy ad-free music, offline listening, background play, and more. With YouTube Music Premium, you can download any song or playlist from YouTube and listen to it offline on your device. Here are the steps to do that:

      -
        -
      1. Go to https://music.youtube.com/ and sign in with your Google account.
      2. Search for King of Kings Majesty or any other song you want to download.
      3. Select the song or playlist and tap on the download icon.
      4. Choose the audio quality and confirm your download.
      5. Enjoy listening to your downloaded music offline.
      -

      The alternative ways to download music from YouTube with third-party apps and websites

      -

      If you don't want to pay for YouTube Music Premium, there are some alternative ways to download music from YouTube with third-party apps and websites. However, these methods are not endorsed by YouTube and may not be safe or reliable. Use them at your own risk and discretion. Here are some examples of these methods:

      -
        -
      • You can use a converter site like https://ytmp3.cc/en13/, which converts a YouTube video into an MP3 file you can download for free. Paste the URL of the video, choose the format, and click Convert.
      • You can use a website like https://www.y2mate.com/en68, which lets you download a YouTube video or its audio in various formats and qualities. Paste the URL of the video, choose an option, and click Download.
      • You can use a browser extension like https://addoncrop.com/youtube-video-downloader/, which lets you download a video or its audio directly from your browser. Install the extension, go to the video page, and click the download button.
      -
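      All three of these converter tools start from the same step: you paste a video URL, and the tool pulls the video ID out of it before fetching anything. A minimal sketch of that URL-parsing step in Python (a hypothetical helper for illustration, not the actual code of any of these sites):

```python
from typing import Optional
from urllib.parse import urlparse, parse_qs

def extract_video_id(url: str) -> Optional[str]:
    """Pull the video ID out of a YouTube URL, if one is present."""
    parsed = urlparse(url)
    # Standard watch URLs carry the ID in the ?v= query parameter.
    if parsed.hostname in ("www.youtube.com", "youtube.com", "music.youtube.com"):
        return parse_qs(parsed.query).get("v", [None])[0]
    # Short youtu.be links carry the ID as the path instead.
    if parsed.hostname == "youtu.be":
        return parsed.path.lstrip("/") or None
    return None

print(extract_video_id("https://www.youtube.com/watch?v=8iMA3XBH9bc"))  # 8iMA3XBH9bc
print(extract_video_id("https://youtu.be/8iMA3XBH9bc"))                 # 8iMA3XBH9bc
```

      A URL from any other domain returns `None`, which is roughly why these sites reject links that are not YouTube URLs.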

      Conclusion

      -

      In conclusion, King of Kings Majesty is a beautiful worship song that praises God for His majesty and glory. It is a song that can inspire you, uplift you, and draw you closer to God. If you want to download this song from YouTube, you can either use the official way with YouTube Music Premium or use some alternative ways with third-party apps and websites. However, you should always respect the rights and wishes of the original owners and only download music from YouTube legally and ethically.

      -

      FAQs

      -
        -
      • Q: Who wrote King of Kings Majesty?
      • A: King of Kings Majesty was written by Jarrod Cooper, a British author and songwriter.
      • Q: When was King of Kings Majesty released?
      • A: King of Kings Majesty was released in 1996 as part of Jarrod Cooper's album King of Kings, Majesty.
      • Q: What are some of the scriptures that inspired King of Kings Majesty?
      • A: Some of the scriptures that inspired King of Kings Majesty are Revelation 19:16, Philippians 2:9-11, John 1:14, 1 Corinthians 15:3-4, and Revelation 11:15.
      • Q: How much does YouTube Music Premium cost?
      • A: YouTube Music Premium costs $9.99 per month for an individual plan, $14.99 per month for a family plan, and $4.99 per month for a student plan.
      • Q: What are some of the risks of using third-party apps and websites to download music from YouTube?
      • A: Some of the risks of using third-party apps and websites to download music from YouTube are malware, viruses, spyware, adware, pop-ups, legal issues, quality issues, and privacy issues.

      -
      -
      \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/models/megatron_t5/configuration_megatron_t5.py b/spaces/fclong/summary/fengshen/models/megatron_t5/configuration_megatron_t5.py deleted file mode 100644 index 18b960e947cfd162d79d6b017fb77e30707c4c2e..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/models/megatron_t5/configuration_megatron_t5.py +++ /dev/null @@ -1,255 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-""" T5 model configuration """ -from collections import OrderedDict -from typing import Any, Dict, Iterable, Mapping, Optional - -from transformers import PreTrainedTokenizer, TensorType - -from transformers import is_torch_available -from transformers.configuration_utils import PretrainedConfig -from transformers.onnx import OnnxConfigWithPast -from transformers.utils import logging - - -logger = logging.get_logger(__name__) - -T5_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "T5-small": "https://huggingface.co/T5-small/resolve/main/config.json", - "T5-base": "https://huggingface.co/T5-base/resolve/main/config.json", - "T5-large": "https://huggingface.co/T5-large/resolve/main/config.json", - "T5-3b": "https://huggingface.co/T5-3b/resolve/main/config.json", - "T5-11b": "https://huggingface.co/T5-11b/resolve/main/config.json", -} - - -class T5Config(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a :class:`~transformers.T5Model` or a - :class:`~transformers.TFT5Model`. It is used to instantiate a T5 model according to the specified arguments, - defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration - to that of the T5 `T5-small `__ architecture. - - Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model - outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information. - - Arguments: - vocab_size (:obj:`int`, `optional`, defaults to 32128): - Vocabulary size of the T5 model. Defines the number of different tokens that can be represented by the - :obj:`inputs_ids` passed when calling :class:`~transformers.T5Model` or :class:`~transformers.TFT5Model`. - d_model (:obj:`int`, `optional`, defaults to 512): - Size of the encoder layers and the pooler layer. - d_kv (:obj:`int`, `optional`, defaults to 64): - Size of the key, query, value projections per attention head. 
:obj:`d_kv` has to be equal to :obj:`d_model - // num_heads`. - d_ff (:obj:`int`, `optional`, defaults to 2048): - Size of the intermediate feed forward layer in each :obj:`T5Block`. - num_layers (:obj:`int`, `optional`, defaults to 6): - Number of hidden layers in the Transformer encoder. - num_decoder_layers (:obj:`int`, `optional`): - Number of hidden layers in the Transformer decoder. Will use the same value as :obj:`num_layers` if not - set. - num_heads (:obj:`int`, `optional`, defaults to 8): - Number of attention heads for each attention layer in the Transformer encoder. - relative_attention_num_buckets (:obj:`int`, `optional`, defaults to 32): - The number of buckets to use for each attention layer. - dropout_rate (:obj:`float`, `optional`, defaults to 0.1): - The ratio for all dropout layers. - layer_norm_eps (:obj:`float`, `optional`, defaults to 1e-6): - The epsilon used by the layer normalization layers. - initializer_factor (:obj:`float`, `optional`, defaults to 1): - A factor for initializing all weight matrices (should be kept to 1, used internally for initialization - testing). - feed_forward_proj (:obj:`string`, `optional`, defaults to :obj:`"relu"`): - Type of feed forward layer to be used. Should be one of :obj:`"relu"` or :obj:`"gated-gelu"`. T5v1.1 uses - the :obj:`"gated-gelu"` feed forward projection. Original T5 uses :obj:`"relu"`. - use_cache (:obj:`bool`, `optional`, defaults to :obj:`True`): - Whether or not the model should return the last key/values attentions (not used by all models). - gradient_checkpointing (:obj:`bool`, `optional`, defaults to :obj:`False`): - If True, use gradient checkpointing to save memory at the expense of slower backward pass. 
- """ - model_type = "T5" - keys_to_ignore_at_inference = ["past_key_values"] - - def __init__( - self, - vocab_size=32128, - d_model=512, - d_kv=64, - d_ff=2048, - num_layers=6, - num_decoder_layers=None, - num_heads=8, - relative_attention_num_buckets=32, - dropout_rate=0.1, - layer_norm_epsilon=1e-5, - initializer_factor=1.0, - feed_forward_proj="gelu", - is_encoder_decoder=True, - use_cache=True, - pad_token_id=0, - eos_token_id=1, - gradient_checkpointing=False, - **kwargs - ): - super().__init__( - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - is_encoder_decoder=is_encoder_decoder, - **kwargs, - ) - self.vocab_size = vocab_size - self.d_model = d_model - self.d_kv = d_kv - self.d_ff = d_ff - self.num_layers = num_layers - self.num_decoder_layers = ( - num_decoder_layers if num_decoder_layers is not None else self.num_layers - ) # default = symmetry - self.num_heads = num_heads - self.relative_attention_num_buckets = relative_attention_num_buckets - self.dropout_rate = dropout_rate - self.layer_norm_epsilon = layer_norm_epsilon - self.initializer_factor = initializer_factor - self.feed_forward_proj = feed_forward_proj - self.use_cache = use_cache - self.gradient_checkpointing = gradient_checkpointing - - @property - def hidden_size(self): - return self.d_model - - @property - def num_attention_heads(self): - return self.num_heads - - @property - def num_hidden_layers(self): - return self.num_layers - - -class T5OnnxConfig(OnnxConfigWithPast): - @property - def inputs(self) -> Mapping[str, Mapping[int, str]]: - common_inputs = OrderedDict( - [ - ("input_ids", {0: "batch", 1: "encoder_sequence"}), - ("attention_mask", {0: "batch", 1: "encoder_sequence"}), - ("decoder_input_ids", {0: "batch"}), - ("decoder_attention_mask", {0: "batch"}), - ] - ) - - if self.use_past: - for i in range(0, self._config.num_layers): - common_inputs[f"past_key_values.{i}.decoder.key"] = { - 0: "batch", 2: "past_sequence"} - 
common_inputs[f"past_key_values.{i}.decoder.value"] = { - 0: "batch", 2: "past_sequence"} - common_inputs[f"past_key_values.{i}.encoder.key"] = { - 0: "batch", 2: "past_sequence"} - common_inputs[f"past_key_values.{i}.encoder.value"] = { - 0: "batch", 2: "past_sequence"} - - return common_inputs - - @property - def outputs(self) -> Mapping[str, Mapping[int, str]]: - common_outputs = super().outputs - - if "last_hidden_state" in common_outputs: - common_outputs["last_hidden_state"] = { - 0: "batch", 1: "decoder_sequence"} - - if self.use_past: - for i in range(self._config.num_layers): - common_outputs[f"present.{i}.decoder.key"] = { - 0: "batch", 2: "decoder_sequence"} - common_outputs[f"present.{i}.decoder.value"] = { - 0: "batch", 2: "decoder_sequence"} - common_outputs[f"present.{i}.encoder.key"] = { - 0: "batch", 2: "encoder_sequence"} - common_outputs[f"present.{i}.encoder.value"] = { - 0: "batch", 2: "encoder_sequence"} - - if self.task == "default": - common_outputs["encoder_last_hidden_state"] = { - 0: "batch", 2: "encoder_sequence"} - - return common_outputs - - def generate_dummy_inputs( - self, - tokenizer: PreTrainedTokenizer, - batch_size: int = -1, - seq_length: int = -1, - is_pair: bool = False, - framework: Optional[TensorType] = None, - ) -> Mapping[str, Any]: - - # Generate encoder inputs - encoder_inputs = super().generate_dummy_inputs( - tokenizer, batch_size, seq_length, is_pair, framework) - - # Generate decoder inputs - decoder_inputs = super().generate_dummy_inputs( - tokenizer, batch_size, 1, is_pair, framework) - decoder_inputs = {f"decoder_{name}": tensor for name, - tensor in decoder_inputs.items()} - - ordered_inputs = dict(**encoder_inputs, **decoder_inputs) - if self.use_past: - if not is_torch_available(): - raise ValueError( - "Cannot generate dummy past_keys inputs without PyTorch installed.") - else: - import torch - batch = encoder_inputs["input_ids"].shape[0] - encoder_seq_length = encoder_inputs["input_ids"].shape[1] - 
encoder_shape = ( - batch, - self._config.num_heads, - encoder_seq_length, - self._config.hidden_size // self._config.num_heads, - ) - decoder_shape = (batch, self._config.num_heads, 1, - self._config.hidden_size // self._config.num_heads) - - ordered_inputs["past_key_values"] = [] - for _ in range(self._config.num_layers): - ordered_inputs["past_key_values"].append( - ( - torch.zeros(decoder_shape), - torch.zeros(decoder_shape), - torch.zeros(encoder_shape), - torch.zeros(encoder_shape), - ) - ) - - return ordered_inputs - - @staticmethod - def flatten_output_collection_property(name: str, field: Iterable[Any]) -> Dict[str, Any]: - if name in ["present", "past_key_values"]: - flatten_output = {} - for idx, t in enumerate(field): - flatten_output[f"{name}.{idx}.decoder.key"] = t[0] - flatten_output[f"{name}.{idx}.decoder.value"] = t[1] - flatten_output[f"{name}.{idx}.encoder.key"] = t[2] - flatten_output[f"{name}.{idx}.encoder.value"] = t[3] - - return flatten_output - - return super().flatten_output_collection_property(name, field) diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Blue WA Plus - Enhance Your WhatsApp Chatting with More Privacy and Security.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Blue WA Plus - Enhance Your WhatsApp Chatting with More Privacy and Security.md deleted file mode 100644 index 250f1e764434aacf9291b2a526fd755d02bc685c..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Blue WA Plus - Enhance Your WhatsApp Chatting with More Privacy and Security.md +++ /dev/null @@ -1,120 +0,0 @@ -
      -

      Download Blue WA Plus: A WhatsApp Mod with Amazing Features

      -

      WhatsApp is one of the most popular messaging apps in the world, with over 2 billion users. However, many people are not satisfied with the limited features and options that WhatsApp offers. If you are one of them, you might want to try Blue WA Plus, a WhatsApp mod that gives you more control and customization over your WhatsApp experience.

      -

      download blue wa plus


      Download Zip >>> https://gohhs.com/2uPmRk



      -

      Blue WA Plus is a modified version of WhatsApp that adds many features and functions that are not available in the official app. You can change the appearance of your WhatsApp, send large files and high-quality media, hide your online status and read receipts, backup and restore your chats, and much more. In this article, we will show you how to download and install Blue WA Plus on your Android device, what are the features of this mod, and what are the pros and cons of using it.

      -

      How to Download and Install Blue WA Plus on Your Android Device

      -

      Before you can enjoy the benefits of Blue WA Plus, you need to download and install it on your Android device. Here are the steps you need to follow:

      -
        -
      1. First, you need to enable unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
      2. Next, you need to download the Blue WA Plus APK file from a trusted source. You can use this link to download the latest version of the mod.
      3. Once the download is complete, locate the APK file in your device's storage and tap on it to install it.
      4. Follow the instructions on the screen to complete the installation process.
      5. After the installation is done, open the Blue WA Plus app and enter your phone number to verify it.
      6. You can now enjoy using Blue WA Plus on your device.
      -
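      Because an APK is just a ZIP archive with a specific layout, one simple sanity check before tapping "install" is to verify that the downloaded file actually opens as a ZIP and contains an AndroidManifest.xml. A minimal illustration in Python (a generic check you could run on any sideloaded file, not something the mod itself provides):

```python
import io
import zipfile

def looks_like_apk(file_or_path) -> bool:
    """True if the input opens as a ZIP archive containing AndroidManifest.xml."""
    if not zipfile.is_zipfile(file_or_path):
        return False  # truncated, corrupted, or not a ZIP at all
    with zipfile.ZipFile(file_or_path) as zf:
        return "AndroidManifest.xml" in zf.namelist()

# Demo with an in-memory stand-in for a downloaded file:
fake_apk = io.BytesIO()
with zipfile.ZipFile(fake_apk, "w") as zf:
    zf.writestr("AndroidManifest.xml", "<manifest/>")
fake_apk.seek(0)
print(looks_like_apk(fake_apk))                        # True
print(looks_like_apk(io.BytesIO(b"not a zip file")))   # False
```

      A file that fails this check was either cut off mid-download or is not an APK at all, and is not worth installing.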

      What are the Features of Blue WA Plus?

      -

      Blue WA Plus has many features that make it stand out from the official WhatsApp app. Here are some of them:

      -

      Customizable Themes and Fonts

      -

      One of the most attractive features of Blue WA Plus is that you can customize the appearance of your WhatsApp with different themes and fonts. You can choose from hundreds of themes that are available in the mod or create your own theme using the theme maker tool. You can also change the font size, style, and color of your chats according to your preference.

      -

      Increased File Size Limit and Quality

      -

      Another feature of Blue WA Plus is that you can send large files and high-quality images and videos without compression. The official WhatsApp app compresses media and caps images and videos at 16 MB, which reduces their quality. With Blue WA Plus, you can send image and video files up to 50 MB while preserving their original quality, and documents up to 100 MB in size.

      -

      Privacy and Security Options

      -

      If you value your privacy and security, you will love the options that Blue WA Plus offers. You can hide your online status, blue ticks, typing indicator, and more from others. You can also lock your chats with a password or fingerprint to prevent unauthorized access. You can also enable end-to-end encryption for your chats to ensure that no one can read them except you and the person you are chatting with.

      -

      Backup and Restore Chats

      -

      With Blue WA Plus, you can backup and restore your chats easily using Google Drive or local storage. You can choose to backup your chats daily, weekly, or monthly, or manually whenever you want. You can also restore your chats from any previous backup without losing any data. This way, you can switch devices or reinstall the app without worrying about losing your chats.

      -

      -

      Other Features

      -

      Some other features of Blue WA Plus that you might find useful are:

      -
        -
      • Anti-ban: Blue WA Plus is designed to avoid getting banned by WhatsApp, so you can use it without losing your account.
      • Auto-reply: Blue WA Plus allows you to set auto-reply messages for specific contacts or groups. You can customize the message, time, and frequency of the auto-reply.
      • Stickers: Blue WA Plus supports stickers from various sources, such as WhatsApp, Telegram, Facebook, etc. You can also create your own stickers using the sticker maker tool.
      • And more: Blue WA Plus has many more features that you can explore and enjoy, such as group video calls, broadcast messages, status downloader, DND mode, etc.
      -
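      The auto-reply behavior described above (a per-contact message with a frequency cap) can be sketched in a few lines of Python. This is purely illustrative logic, not the mod's actual implementation:

```python
def pick_auto_reply(sender: str, rules: dict, sent_counts: dict):
    """Return the configured auto-reply for a sender, or None.

    rules maps a contact name to {"reply": str, "max_per_hour": int};
    sent_counts tracks how many replies were already sent this hour.
    """
    rule = rules.get(sender)
    if rule is None:
        return None  # no auto-reply configured for this contact
    if sent_counts.get(sender, 0) >= rule["max_per_hour"]:
        return None  # frequency cap for this hour already reached
    sent_counts[sender] = sent_counts.get(sender, 0) + 1
    return rule["reply"]

rules = {"Alice": {"reply": "I'm driving, will reply later.", "max_per_hour": 1}}
counts = {}
print(pick_auto_reply("Alice", rules, counts))  # I'm driving, will reply later.
print(pick_auto_reply("Alice", rules, counts))  # None (cap of 1 per hour reached)
print(pick_auto_reply("Bob", rules, counts))    # None (no rule configured)
```

      The frequency cap is what keeps an auto-responder from spamming a contact who sends several messages in a row.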

      What are the Pros and Cons of Blue WA Plus?

      -

      As with any modded app, Blue WA Plus has its pros and cons. Here are some of them:

      | Pros | Cons |
      | --- | --- |
      | More features and options than the official WhatsApp app | Possible compatibility issues with some devices or apps |
      | More customization and personalization of your WhatsApp | Possible security risks from downloading from unknown sources |
      | More control and privacy over your WhatsApp | Possible violation of WhatsApp's terms of service and policies |
      | More fun and convenience with your WhatsApp | Possible need to update the mod frequently to keep up with the official app's updates |

      Conclusion

      -

      Blue WA Plus is a WhatsApp mod that offers many amazing features that are not available in the official app. You can download and install it on your Android device easily and enjoy using it. However, you should also be aware of the potential drawbacks and risks of using a modded app. Ultimately, the choice is yours whether you want to use Blue WA Plus or not. If you do decide to use it, we hope that this article has helped you understand how to download and install it, what are its features, and what are its pros and cons.

      -

      If you are interested in downloading Blue WA Plus, you can use this link to get the latest version of the mod. We hope that you have a great time using Blue WA Plus!

      -

      Frequently Asked Questions (FAQs)

      -

      Q: Is Blue WA Plus safe to use?

      -

      A: Blue WA Plus is generally safe to use as long as you download it from a trusted source and scan it for viruses before installing it. However, there is always a risk of malware or spyware when downloading any modded app from unknown sources. Therefore, you should use Blue WA Plus at your own discretion and responsibility.

      -

      Q: Is Blue WA Plus legal to use?

      -

      A: Blue WA Plus is not an official app from WhatsApp Inc., nor is it affiliated with them in any way. It is a third-party mod that modifies the original app's code and adds new features and functions. Therefore, it may violate WhatsApp's terms of service and policies, which could result in your account getting banned or suspended. Therefore, you should use Blue WA Plus at your own risk and respect WhatsApp's rules and regulations.

      -

      Q: Can I use Blue WA Plus with the official WhatsApp app?

      -

      A: No, you cannot use Blue WA Plus with the official WhatsApp app on the same device. You need to uninstall the official app before installing Blue WA Plus. Alternatively, you can use a different phone number for Blue WA Plus than the one you use for the official app.

      -

      Q: How can I update Blue WA Plus?

      -

      A: To update Blue WA Plus, you need to download the latest version of the mod from the same source where you downloaded it originally. You can also check for updates within the app by going to Settings > Updates. You need to uninstall the previous version of the mod before installing the new one. You can backup your chats before updating to avoid losing any data.

      -

      Q: How can I contact the developer of Blue WA Plus?

      -

      A: If you have any questions, suggestions, or feedback about Blue WA Plus, you can contact the developer of the mod by emailing him at bluwaplus@gmail.com. You can also follow him on Twitter @bluwaplus for the latest news and updates about the mod.

      -
      -
      \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/mime/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/mime/README.md deleted file mode 100644 index 506fbe550a8dee9f0bde702fda6a040dfed3aba8..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/mime/README.md +++ /dev/null @@ -1,90 +0,0 @@ -# mime - -Comprehensive MIME type mapping API based on mime-db module. - -## Install - -Install with [npm](http://github.com/isaacs/npm): - - npm install mime - -## Contributing / Testing - - npm run test - -## Command Line - - mime [path_string] - -E.g. - - > mime scripts/jquery.js - application/javascript - -## API - Queries - -### mime.lookup(path) -Get the mime type associated with a file, if no mime type is found `application/octet-stream` is returned. Performs a case-insensitive lookup using the extension in `path` (the substring after the last '/' or '.'). E.g. - -```js -var mime = require('mime'); - -mime.lookup('/path/to/file.txt'); // => 'text/plain' -mime.lookup('file.txt'); // => 'text/plain' -mime.lookup('.TXT'); // => 'text/plain' -mime.lookup('htm'); // => 'text/html' -``` - -### mime.default_type -Sets the mime type returned when `mime.lookup` fails to find the extension searched for. (Default is `application/octet-stream`.) - -### mime.extension(type) -Get the default extension for `type` - -```js -mime.extension('text/html'); // => 'html' -mime.extension('application/octet-stream'); // => 'bin' -``` - -### mime.charsets.lookup() - -Map mime-type to charset - -```js -mime.charsets.lookup('text/plain'); // => 'UTF-8' -``` - -(The logic for charset lookups is pretty rudimentary. Feel free to suggest improvements.) - -## API - Defining Custom Types - -Custom type mappings can be added on a per-project basis via the following APIs. 
- -### mime.define() - -Add custom mime/extension mappings - -```js -mime.define({ - 'text/x-some-format': ['x-sf', 'x-sft', 'x-sfml'], - 'application/x-my-type': ['x-mt', 'x-mtt'], - // etc ... -}); - -mime.lookup('x-sft'); // => 'text/x-some-format' -``` - -The first entry in the extensions array is returned by `mime.extension()`. E.g. - -```js -mime.extension('text/x-some-format'); // => 'x-sf' -``` - -### mime.load(filepath) - -Load mappings from an Apache ".types" format file - -```js -mime.load('./my_project.types'); -``` -The .types file format is simple - See the `types` dir for examples. diff --git a/spaces/flatindo/generate2/diffusion_webui/diffusion_models/img2img_app.py b/spaces/flatindo/generate2/diffusion_webui/diffusion_models/img2img_app.py deleted file mode 100644 index a85ee16eedf67ea8ce58374513f9e7a7a3843a39..0000000000000000000000000000000000000000 --- a/spaces/flatindo/generate2/diffusion_webui/diffusion_models/img2img_app.py +++ /dev/null @@ -1,155 +0,0 @@ -import gradio as gr -import torch -from diffusers import StableDiffusionImg2ImgPipeline -from PIL import Image - -from diffusion_webui.utils.model_list import stable_model_list -from diffusion_webui.utils.scheduler_list import ( - SCHEDULER_MAPPING, - get_scheduler, -) - - -class StableDiffusionImage2ImageGenerator: - def __init__(self): - self.pipe = None - - def load_model(self, stable_model_path, scheduler): - if self.pipe is None or self.pipe.model_name != stable_model_path or self.pipe.scheduler_name != scheduler: - self.pipe = StableDiffusionImg2ImgPipeline.from_pretrained( - stable_model_path, safety_checker=None, torch_dtype=torch.float16 - ) - - self.pipe.model_name = stable_model_path - self.pipe.scheduler_name = scheduler - self.pipe = get_scheduler(pipe=self.pipe, scheduler=scheduler) - self.pipe.to("cuda") - self.pipe.enable_xformers_memory_efficient_attention() - - return self.pipe - - def generate_image( - self, - image_path: str, - stable_model_path: str, - prompt: str, - 
negative_prompt: str, - num_images_per_prompt: int, - scheduler: str, - guidance_scale: int, - num_inference_step: int, - seed_generator=0, - ): - pipe = self.load_model( - stable_model_path=stable_model_path, - scheduler=scheduler, - ) - - if seed_generator == 0: - random_seed = torch.randint(0, 1000000, (1,)) - generator = torch.manual_seed(random_seed) - else: - generator = torch.manual_seed(seed_generator) - - image = Image.open(image_path) - images = pipe( - prompt, - image=image, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - num_inference_steps=num_inference_step, - guidance_scale=guidance_scale, - generator=generator, - ).images - - return images - - def app(): - with gr.Blocks(): - with gr.Row(): - with gr.Column(): - image2image_image_file = gr.Image( - type="filepath", label="Image" - ).style(height=260) - - image2image_prompt = gr.Textbox( - lines=1, - placeholder="Prompt", - show_label=False, - ) - - image2image_negative_prompt = gr.Textbox( - lines=1, - placeholder="Negative Prompt", - show_label=False, - ) - - with gr.Row(): - with gr.Column(): - image2image_model_path = gr.Dropdown( - choices=stable_model_list, - value=stable_model_list[0], - label="Stable Model Id", - ) - - image2image_guidance_scale = gr.Slider( - minimum=0.1, - maximum=15, - step=0.1, - value=7.5, - label="Guidance Scale", - ) - image2image_num_inference_step = gr.Slider( - minimum=1, - maximum=100, - step=1, - value=50, - label="Num Inference Step", - ) - with gr.Row(): - with gr.Column(): - image2image_scheduler = gr.Dropdown( - choices=list(SCHEDULER_MAPPING.keys()), - value=list(SCHEDULER_MAPPING.keys())[0], - label="Scheduler", - ) - image2image_num_images_per_prompt = gr.Slider( - minimum=1, - maximum=4, - step=1, - value=1, - label="Number Of Images", - ) - - image2image_seed_generator = gr.Slider( - minimum=0, - maximum=1000000, - step=1, - value=0, - label="Seed(0 for random)", - ) - - image2image_predict_button = 
gr.Button(value="Generator") - - with gr.Column(): - output_image = gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery", - ).style(grid=(1, 2)) - - image2image_predict_button.click( - fn=StableDiffusionImage2ImageGenerator().generate_image, - inputs=[ - image2image_image_file, - image2image_model_path, - image2image_prompt, - image2image_negative_prompt, - image2image_num_images_per_prompt, - image2image_scheduler, - image2image_guidance_scale, - image2image_num_inference_step, - image2image_seed_generator, - ], - outputs=[output_image], - ) diff --git a/spaces/flatindo/generate2/diffusion_webui/diffusion_models/inpaint_app.py b/spaces/flatindo/generate2/diffusion_webui/diffusion_models/inpaint_app.py deleted file mode 100644 index b4cc605e7824af011212b7957e186f459c76abe8..0000000000000000000000000000000000000000 --- a/spaces/flatindo/generate2/diffusion_webui/diffusion_models/inpaint_app.py +++ /dev/null @@ -1,149 +0,0 @@ -import gradio as gr -import torch -from diffusers import DiffusionPipeline - -from diffusion_webui.utils.model_list import stable_inpiant_model_list - - -class StableDiffusionInpaintGenerator: - def __init__(self): - self.pipe = None - - def load_model(self, stable_model_path): - if self.pipe is None or self.pipe.model_name != stable_model_path: - self.pipe = DiffusionPipeline.from_pretrained( - stable_model_path, revision="fp16", torch_dtype=torch.float16 - ) - self.pipe.to("cuda") - self.pipe.enable_xformers_memory_efficient_attention() - self.pipe.model_name = stable_model_path - - - return self.pipe - - def generate_image( - self, - pil_image: str, - stable_model_path: str, - prompt: str, - negative_prompt: str, - num_images_per_prompt: int, - guidance_scale: int, - num_inference_step: int, - seed_generator=0, - ): - image = pil_image["image"].convert("RGB").resize((512, 512)) - mask_image = pil_image["mask"].convert("RGB").resize((512, 512)) - pipe = self.load_model(stable_model_path) - - if seed_generator == 0: 
- random_seed = torch.randint(0, 1000000, (1,)) - generator = torch.manual_seed(random_seed) - else: - generator = torch.manual_seed(seed_generator) - - output = pipe( - prompt=prompt, - image=image, - mask_image=mask_image, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - num_inference_steps=num_inference_step, - guidance_scale=guidance_scale, - generator=generator, - ).images - - return output - - def app(): - with gr.Blocks(): - with gr.Row(): - with gr.Column(): - stable_diffusion_inpaint_image_file = gr.Image( - source="upload", - tool="sketch", - elem_id="image_upload", - type="pil", - label="Upload", - ).style(height=260) - - stable_diffusion_inpaint_prompt = gr.Textbox( - lines=1, - placeholder="Prompt", - show_label=False, - ) - - stable_diffusion_inpaint_negative_prompt = gr.Textbox( - lines=1, - placeholder="Negative Prompt", - show_label=False, - ) - stable_diffusion_inpaint_model_id = gr.Dropdown( - choices=stable_inpiant_model_list, - value=stable_inpiant_model_list[0], - label="Inpaint Model Id", - ) - with gr.Row(): - with gr.Column(): - stable_diffusion_inpaint_guidance_scale = gr.Slider( - minimum=0.1, - maximum=15, - step=0.1, - value=7.5, - label="Guidance Scale", - ) - - stable_diffusion_inpaint_num_inference_step = ( - gr.Slider( - minimum=1, - maximum=100, - step=1, - value=50, - label="Num Inference Step", - ) - ) - - with gr.Row(): - with gr.Column(): - stable_diffusion_inpiant_num_images_per_prompt = gr.Slider( - minimum=1, - maximum=4, - step=1, - value=1, - label="Number Of Images", - ) - stable_diffusion_inpaint_seed_generator = ( - gr.Slider( - minimum=0, - maximum=1000000, - step=1, - value=0, - label="Seed(0 for random)", - ) - ) - - stable_diffusion_inpaint_predict = gr.Button( - value="Generator" - ) - - with gr.Column(): - output_image = gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery", - ).style(grid=(1, 2)) - - stable_diffusion_inpaint_predict.click( - 
fn=StableDiffusionInpaintGenerator().generate_image, - inputs=[ - stable_diffusion_inpaint_image_file, - stable_diffusion_inpaint_model_id, - stable_diffusion_inpaint_prompt, - stable_diffusion_inpaint_negative_prompt, - stable_diffusion_inpiant_num_images_per_prompt, - stable_diffusion_inpaint_guidance_scale, - stable_diffusion_inpaint_num_inference_step, - stable_diffusion_inpaint_seed_generator, - ], - outputs=[output_image], - ) diff --git a/spaces/floriankrempl/mtg_rules_bot/mtg/data_handler/process_scryfall_data.py b/spaces/floriankrempl/mtg_rules_bot/mtg/data_handler/process_scryfall_data.py deleted file mode 100644 index 503a065c93e1f3babcb9ef63a6286dc5edb1fc1a..0000000000000000000000000000000000000000 --- a/spaces/floriankrempl/mtg_rules_bot/mtg/data_handler/process_scryfall_data.py +++ /dev/null @@ -1,86 +0,0 @@ -# %% -# download bulk data -import json -from pathlib import Path -import requests -import random -from tqdm import tqdm - -cards_data_path = Path("data/cards") - -# %% -# download info - -lookup_url = "https://api.scryfall.com/bulk-data" -bulk_requests_info = requests.get(lookup_url) -bulk_requests_info = bulk_requests_info.json() - -# %% -# download cards data - -oracl_card_info = [ - info for info in bulk_requests_info["data"] if info["type"] == "oracle_cards" -][0] -oracle_cards_url = oracl_card_info["download_uri"] -oracle_card_data = requests.get(oracle_cards_url) -oracle_card_data = oracle_card_data.json() - - -# %% -# download rulings - -rulings_info = [ - info for info in bulk_requests_info["data"] if info["type"] == "rulings" -][0] -rulings_info_url = rulings_info["download_uri"] -rulings_data = requests.get(rulings_info_url) -rulings_data = rulings_data.json() - - -# %% -# combine - -idx_2_card_data = {card_data["oracle_id"]: card_data for card_data in oracle_card_data} - -for ruling in tqdm(rulings_data): - oracle_id = ruling["oracle_id"] - if "rulings" not in idx_2_card_data[oracle_id]: - idx_2_card_data[oracle_id]["rulings"] = [] - 
idx_2_card_data[oracle_id]["rulings"].append(ruling["comment"]) - - -# %% -# save all cards -BLOCKED_CARD_TYPES = ["Card", "Stickers", "Hero"] - -processed_data = [ - card - for card in idx_2_card_data.values() - if (card["type_line"] not in BLOCKED_CARD_TYPES) and (card["layout"] == "normal") -] -print(f"saving cards data with {len(processed_data)} cards") -all_cards_file = Path("data/cards/scryfall_all_cards_with_rulings.json") - -with all_cards_file.open("w", encoding="utf-8") as outfile: - json.dump(processed_data, outfile) - - -# %% -# save sample - -all_cards_sample_file = Path("data/cards/scryfall_all_cards_with_rulings_sample.json") -sample = random.choices( - [card for card in processed_data if card.get("rulings", [])], k=10 -) -with all_cards_sample_file.open("w", encoding="utf-8") as outfile: - json.dump(sample, outfile) - -# %% - -print( - "there are ", - len([c for c in processed_data if c.get("rulings")]), - "cards with rulings", -) - -# %% diff --git a/spaces/freddyaboulton/dataset-viewer/README.md b/spaces/freddyaboulton/dataset-viewer/README.md deleted file mode 100644 index b088c77894bafe318821d72d621d029ca08b4dd6..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/dataset-viewer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dataset Viewer -emoji: ⚡ -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/freshield/ChatGPT-gradio/app.py b/spaces/freshield/ChatGPT-gradio/app.py deleted file mode 100644 index 7ba78ce1d8ad041870717243246d2ab193b02f9b..0000000000000000000000000000000000000000 --- a/spaces/freshield/ChatGPT-gradio/app.py +++ /dev/null @@ -1,76 +0,0 @@ -# coding=utf-8 -""" -@Author: Freshield -@Contact: yangyufresh@163.com -@File: server_simple.py -@Time: 2023-03-09 22:39 -@Last_update: 2023-03-09 22:39 -@Desc: None 
-@==============================================@ -@ _____ _ _ _ _ @ -@ | __|___ ___ ___| |_|_|___| |_| | @ -@ | __| _| -_|_ -| | | -_| | . | @ -@ |__| |_| |___|___|_|_|_|___|_|___| @ -@ Freshield @ -@==============================================@ -""" -import gradio as gr -from lib.OpenaiBot import OpenaiBot - -openai_bot = OpenaiBot() - - -def ask_chatGPT(openai_key, role, new_msg, state): - """Ask chatGPT a question""" - res_content = 'Sorry, the server encountered an error. Please try again later.' - res = [(new_msg, res_content)] - try: - openai_bot.set_api_key(openai_key) - res_content = openai_bot.get_response(role, new_msg, state) - res = [(new_msg, res_content)] - except Exception as e: - print(e) - finally: - state += res - res = state - - return res, state - - -def clean_question(question): - """Clear the question box""" - return '' - - -if __name__ == '__main__': - with gr.Blocks(title="Try a chatGPT conversation", css="#maxheight {max-height: 500px} ") as demo: - state = gr.State([]) - with gr.Column(variant='panel'): - # title - with gr.Row(): - gr.Markdown("## Try a chatGPT conversation") - with gr.Row(): - # left part - with gr.Column(): - openai_key = gr.Textbox( - label='openai_key', placeholder='Enter your OpenAI API key', type='password') - role_b = gr.Textbox( - label='Enter the role you want chatGPT to play', lines=2, - value='You are ChatGPT, a large language model trained by OpenAI. Answer user questions concisely.') - question_b = gr.Textbox( - label='Enter the question you want to ask', - placeholder='Type what you want to ask...', - lines=3 - ) - with gr.Row(): - greet_btn = gr.Button('Submit', variant="primary") - # right part - with gr.Column(): - answer_b = gr.Chatbot( - label='chatGPT Q&A', value=[(None, 'Ask your question here')], elem_id='maxheight') - - greet_btn.click(fn=ask_chatGPT, inputs=[openai_key, role_b, question_b, state], outputs=[answer_b, state]) - greet_btn.click(fn=clean_question, inputs=[question_b], outputs=[question_b]) - - demo.launch() - demo.close() diff --git a/spaces/g4f/freegpt-webui/client/css/global.css b/spaces/g4f/freegpt-webui/client/css/global.css deleted file mode 100644 index
d2c2d94eaeed9e630979ffbf36cad37cb08a945b..0000000000000000000000000000000000000000 --- a/spaces/g4f/freegpt-webui/client/css/global.css +++ /dev/null @@ -1,66 +0,0 @@ -@import url("https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500;600;700;800;900&display=swap"); -* { - --font-1: "Inter", sans-serif; - --section-gap: 24px; - --border-radius-1: 8px; - margin: 0; - padding: 0; - box-sizing: border-box; - position: relative; - font-family: var(--font-1); -} - -.theme-light { - --colour-1: #f5f5f5; - --colour-2: #000000; - --colour-3: #474747; - --colour-4: #949494; - --colour-5: #ebebeb; - --colour-6: #dadada; - - --accent: #3a3a3a; - --blur-bg: #ffffff; - --blur-border: #dbdbdb; - --user-input: #282828; - --conversations: #666666; -} - -.theme-dark { - --colour-1: #181818; - --colour-2: #ccc; - --colour-3: #dadada; - --colour-4: #f0f0f0; - --colour-5: #181818; - --colour-6: #242424; - - --accent: #151718; - --blur-bg: #242627; - --blur-border: #242627; - --user-input: #f5f5f5; - --conversations: #555555; -} - -html, -body { - background: var(--colour-1); - color: var(--colour-3); -} - -ol, -ul { - padding-left: 20px; -} - -.shown { - display: flex !important; -} - -a:-webkit-any-link { - color: var(--accent); -} - -@media screen and (max-height: 720px) { - :root { - --section-gap: 16px; - } -} diff --git a/spaces/gaouzief/b/app.py b/spaces/gaouzief/b/app.py deleted file mode 100644 index e2baf29247fdd75903697d71a498e9de137f37bc..0000000000000000000000000000000000000000 --- a/spaces/gaouzief/b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/bigscience/bloom").launch() \ No newline at end of file diff --git a/spaces/giswqs/solara-maxar/README.md b/spaces/giswqs/solara-maxar/README.md deleted file mode 100644 index 8d5ea48deebf519188501a0e931eef280a93e570..0000000000000000000000000000000000000000 --- a/spaces/giswqs/solara-maxar/README.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Solara Maxar -emoji: 🏃 
-colorFrom: blue -colorTo: green -sdk: docker -pinned: false -license: mit -app_port: 8765 ---- - -## Introduction - -**A Solara Web App for Visualizing [Maxar Open Data](https://www.maxar.com/open-data)** - -- GitHub: -- Web App: -- Hugging Face: - -## Demo - -[![demo](https://img.youtube.com/vi/RBjZ5Ju09iU/0.jpg)](https://www.youtube.com/watch?v=RBjZ5Ju09iU) - -## Tutorial - -[![tutorial](https://img.youtube.com/vi/8t5M-EGR0sA/0.jpg)](https://www.youtube.com/watch?v=8t5M-EGR0sA) diff --git a/spaces/gotiQspiryo/whisper-ui/examples/4 images 1 mot pingouin les secrets et les curiosits des pingouins rvls.md b/spaces/gotiQspiryo/whisper-ui/examples/4 images 1 mot pingouin les secrets et les curiosits des pingouins rvls.md deleted file mode 100644 index e7a85ac765c27c2558676db27a4a8a83df8d0cd1..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/4 images 1 mot pingouin les secrets et les curiosits des pingouins rvls.md +++ /dev/null @@ -1,5 +0,0 @@ -
      \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Blood Money part 1 in hindi free download 1080p How Kunal uncovers the truth and fights for justice.md b/spaces/gotiQspiryo/whisper-ui/examples/Blood Money part 1 in hindi free download 1080p How Kunal uncovers the truth and fights for justice.md deleted file mode 100644 index f7eeeeb96a515504d57feaf728212ff05643bb4d..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Blood Money part 1 in hindi free download 1080p How Kunal uncovers the truth and fights for justice.md +++ /dev/null @@ -1,6 +0,0 @@ -

      diff --git a/spaces/gradio/HuBERT/examples/roberta/commonsense_qa/commonsense_qa_task.py b/spaces/gradio/HuBERT/examples/roberta/commonsense_qa/commonsense_qa_task.py deleted file mode 100644 index 216093f7087a61060767babf5a3f3f4e716a4dfe..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/roberta/commonsense_qa/commonsense_qa_task.py +++ /dev/null @@ -1,190 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import json -import os - -import numpy as np -import torch -from fairseq.data import ( - Dictionary, - IdDataset, - ListDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - RawLabelDataset, - RightPadDataset, - SortDataset, - data_utils, - encoders, -) -from fairseq.tasks import LegacyFairseqTask, register_task - - -@register_task("commonsense_qa") -class CommonsenseQATask(LegacyFairseqTask): - """Task to finetune RoBERTa for Commonsense QA.""" - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument( - "data", metavar="DIR", help="path to data directory; we load .jsonl" - ) - parser.add_argument( - "--init-token", - type=int, - default=None, - help="add token at the beginning of each batch item", - ) - parser.add_argument("--num-classes", type=int, default=5) - - def __init__(self, args, vocab): - super().__init__(args) - self.vocab = vocab - self.mask = vocab.add_symbol("") - - self.bpe = encoders.build_bpe(args) - - @classmethod - def load_dictionary(cls, filename): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - dictionary = Dictionary.load(filename) - dictionary.add_symbol("") - return dictionary - - @classmethod - def setup_task(cls, args, **kwargs): - assert ( - args.criterion == "sentence_ranking" - ), "Must set --criterion=sentence_ranking" - - # load data and 
label dictionaries - vocab = cls.load_dictionary(os.path.join(args.data, "dict.txt")) - print("| dictionary: {} types".format(len(vocab))) - - return cls(args, vocab) - - def load_dataset( - self, split, epoch=1, combine=False, data_path=None, return_only=False, **kwargs - ): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - """ - - def binarize(s, append_bos=False): - if self.bpe is not None: - s = self.bpe.encode(s) - tokens = self.vocab.encode_line( - s, - append_eos=True, - add_if_not_exist=False, - ).long() - if append_bos and self.args.init_token is not None: - tokens = torch.cat([tokens.new([self.args.init_token]), tokens]) - return tokens - - if data_path is None: - data_path = os.path.join(self.args.data, split + ".jsonl") - if not os.path.exists(data_path): - raise FileNotFoundError("Cannot find data: {}".format(data_path)) - - src_tokens = [[] for i in range(self.args.num_classes)] - src_lengths = [[] for i in range(self.args.num_classes)] - labels = [] - - with open(data_path) as h: - for line in h: - example = json.loads(line.strip()) - if "answerKey" in example: - label = ord(example["answerKey"]) - ord("A") - labels.append(label) - question = example["question"]["stem"] - assert len(example["question"]["choices"]) == self.args.num_classes - # format: ` Q: Where would I not want a fox? 
A: hen house ` - question = "Q: " + question - question_toks = binarize(question, append_bos=True) - for i, choice in enumerate(example["question"]["choices"]): - src = "A: " + choice["text"] - src_bin = torch.cat([question_toks, binarize(src)]) - src_tokens[i].append(src_bin) - src_lengths[i].append(len(src_bin)) - assert all( - len(src_tokens[0]) == len(src_tokens[i]) - for i in range(self.args.num_classes) - ) - assert len(src_tokens[0]) == len(src_lengths[0]) - assert len(labels) == 0 or len(labels) == len(src_tokens[0]) - - for i in range(self.args.num_classes): - src_lengths[i] = np.array(src_lengths[i]) - src_tokens[i] = ListDataset(src_tokens[i], src_lengths[i]) - src_lengths[i] = ListDataset(src_lengths[i]) - - dataset = { - "id": IdDataset(), - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens[0], reduce=True), - } - - for i in range(self.args.num_classes): - dataset.update( - { - "net_input{}".format(i + 1): { - "src_tokens": RightPadDataset( - src_tokens[i], - pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": src_lengths[i], - } - } - ) - - if len(labels) > 0: - dataset.update({"target": RawLabelDataset(labels)}) - - dataset = NestedDictionaryDataset( - dataset, - sizes=[np.maximum.reduce([src_token.sizes for src_token in src_tokens])], - ) - - with data_utils.numpy_seed(self.args.seed): - dataset = SortDataset( - dataset, - # shuffle - sort_order=[np.random.permutation(len(dataset))], - ) - - print("| Loaded {} with {} samples".format(split, len(dataset))) - - self.datasets[split] = dataset - return self.datasets[split] - - def build_model(self, args): - from fairseq import models - - model = models.build_model(args, self) - - model.register_classification_head( - "sentence_classification_head", - num_classes=1, - ) - - return model - - @property - def source_dictionary(self): - return self.vocab - - @property - def target_dictionary(self): - return self.vocab diff --git a/spaces/gradio/HuBERT/fairseq/models/fconv_lm.py 
b/spaces/gradio/HuBERT/fairseq/models/fconv_lm.py deleted file mode 100644 index 07391eaa2908eacd2709176942d920c483c4f066..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/models/fconv_lm.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq import utils -from fairseq.models import ( - FairseqLanguageModel, - register_model, - register_model_architecture, -) -from fairseq.models.fconv import FConvDecoder - - -@register_model("fconv_lm") -class FConvLanguageModel(FairseqLanguageModel): - def __init__(self, decoder): - super().__init__(decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-layers", - type=str, - metavar="EXPR", - help="decoder layers [(dim, kernel_size), ...]", - ) - parser.add_argument( - "--decoder-out-embed-dim", - type=int, - metavar="N", - help="decoder output embedding dimension", - ) - parser.add_argument( - "--adaptive-softmax-cutoff", - metavar="EXPR", - help="comma separated list of adaptive softmax cutoff points. 
" - "Must be used with adaptive_loss criterion", - ) - parser.add_argument( - "--adaptive-softmax-dropout", - type=float, - metavar="D", - help="sets adaptive softmax dropout for the tail projections", - ) - parser.add_argument( - "--decoder-attention", - type=str, - metavar="EXPR", - help="decoder attention [True, ...]", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure all arguments are present in older models - base_lm_architecture(args) - - if hasattr(args, "max_target_positions") and not hasattr( - args, "tokens_per_sample" - ): - args.tokens_per_sample = args.max_target_positions - - decoder = FConvDecoder( - dictionary=task.target_dictionary, - embed_dim=args.decoder_embed_dim, - convolutions=eval(args.decoder_layers), - out_embed_dim=args.decoder_embed_dim, - attention=eval(args.decoder_attention), - dropout=args.dropout, - max_positions=args.tokens_per_sample, - share_embed=False, - positional_embeddings=False, - adaptive_softmax_cutoff=( - utils.eval_str_list(args.adaptive_softmax_cutoff, type=int) - if args.criterion == "adaptive_loss" - else None - ), - adaptive_softmax_dropout=args.adaptive_softmax_dropout, - ) - return FConvLanguageModel(decoder) - - -@register_model_architecture("fconv_lm", "fconv_lm") -def base_lm_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 128) - args.decoder_layers = getattr(args, "decoder_layers", "[(1268, 4)] * 13") - args.decoder_attention = getattr(args, "decoder_attention", "False") - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - - -@register_model_architecture("fconv_lm", "fconv_lm_dauphin_wikitext103") -def fconv_lm_dauphin_wikitext103(args): - layers = "[(850, 6)] * 3" - layers += " + [(850, 1)] * 1" - layers += " + [(850, 5)] * 4" - layers += " + [(850, 1)] * 1" - layers 
+= " + [(850, 4)] * 3" - layers += " + [(1024, 4)] * 1" - layers += " + [(2048, 4)] * 1" - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 280) - args.decoder_layers = getattr(args, "decoder_layers", layers) - args.decoder_attention = getattr(args, "decoder_attention", "False") - args.adaptive_softmax_cutoff = getattr( - args, "adaptive_softmax_cutoff", "10000,20000,200000" - ) - base_lm_architecture(args) - - -@register_model_architecture("fconv_lm", "fconv_lm_dauphin_gbw") -def fconv_lm_dauphin_gbw(args): - layers = "[(512, 5)]" - layers += " + [(128, 1, 0), (128, 5, 0), (512, 1, 3)] * 3" - layers += " + [(512, 1, 0), (512, 5, 0), (1024, 1, 3)] * 3" - layers += " + [(1024, 1, 0), (1024, 5, 0), (2048, 1, 3)] * 6" - layers += " + [(1024, 1, 0), (1024, 5, 0), (4096, 1, 3)]" - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 128) - args.decoder_layers = getattr(args, "decoder_layers", layers) - args.decoder_attention = getattr(args, "decoder_attention", "False") - args.adaptive_softmax_cutoff = getattr( - args, "adaptive_softmax_cutoff", "10000,50000,200000" - ) - base_lm_architecture(args) diff --git a/spaces/gradio/HuBERT/fairseq/modules/dynamic_convolution.py b/spaces/gradio/HuBERT/fairseq/modules/dynamic_convolution.py deleted file mode 100644 index 0121d453b9e026f5128dd41fce691aa1b4486448..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/dynamic_convolution.py +++ /dev/null @@ -1,310 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout - -from .unfold import unfold1d - - -def DynamicConv( - input_size, - kernel_size=1, - padding_l=None, - num_heads=1, - weight_dropout=0.0, - weight_softmax=False, - renorm_padding=False, - bias=False, - conv_bias=False, - query_size=None, - in_proj=False, -): - if torch.cuda.is_available(): - try: - from fairseq.modules.dynamicconv_layer import DynamicconvLayer - - return DynamicconvLayer( - input_size, - kernel_size=kernel_size, - padding_l=padding_l, - num_heads=num_heads, - weight_dropout=weight_dropout, - weight_softmax=weight_softmax, - renorm_padding=renorm_padding, - bias=bias, - conv_bias=conv_bias, - query_size=query_size, - ) - except ImportError as e: - print(e) - return DynamicConv1dTBC( - input_size, - kernel_size=kernel_size, - padding_l=padding_l, - num_heads=num_heads, - weight_dropout=weight_dropout, - weight_softmax=weight_softmax, - renorm_padding=renorm_padding, - bias=bias, - conv_bias=conv_bias, - query_size=query_size, - ) - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m - - -@with_incremental_state -class DynamicConv1dTBC(nn.Module): - """Dynamic lightweight convolution taking T x B x C inputs - Args: - input_size: # of channels of the input - kernel_size: convolution channels - padding_l: padding to the left when using "same" padding - num_heads: number of heads used. 
The weight is of shape (num_heads, 1, kernel_size) - weight_dropout: the drop rate of the DropConnect to drop the weight - weight_softmax: normalize the weight with softmax before the convolution - renorm_padding: re-normalize the filters to ignore the padded part (only the non-padding parts sum up to 1) - bias: use bias - conv_bias: bias of the convolution - query_size: specified when feeding a different input as the query - in_proj: project the input and generate the filter together - - Shape: - Input: TxBxC, i.e. (timesteps, batch_size, input_size) - Output: TxBxC, i.e. (timesteps, batch_size, input_size) - - Attributes: - weight: the learnable weights of the module of shape - `(num_heads, 1, kernel_size)` - bias: the learnable bias of the module of shape `(input_size)` - """ - - def __init__( - self, - input_size, - kernel_size=1, - padding_l=None, - num_heads=1, - weight_dropout=0.0, - weight_softmax=False, - renorm_padding=False, - bias=False, - conv_bias=False, - query_size=None, - in_proj=False, - ): - super().__init__() - self.input_size = input_size - self.query_size = input_size if query_size is None else query_size - self.kernel_size = kernel_size - self.padding_l = padding_l - self.num_heads = num_heads - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - self.weight_softmax = weight_softmax - self.renorm_padding = renorm_padding - - if in_proj: - self.weight_linear = Linear( - self.input_size, self.input_size + num_heads * kernel_size * 1 - ) - else: - self.weight_linear = Linear( - self.query_size, num_heads * kernel_size * 1, bias=bias - ) - if conv_bias: - self.conv_bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.conv_bias = None - self.reset_parameters() - - @property - def in_proj(self): - return ( - self.weight_linear.out_features - == self.input_size + self.num_heads * self.kernel_size - ) - - def reset_parameters(self): - self.weight_linear.reset_parameters() - if 
self.conv_bias is not None: - nn.init.constant_(self.conv_bias, 0.0) - - def forward(self, x, incremental_state=None, query=None, unfold=None): - """Assuming the input, x, of the shape T x B x C and producing an output in the shape T x B x C - args: - x: Input of shape T x B x C, i.e. (timesteps, batch_size, input_size) - incremental_state: A dict to keep the state - unfold: unfold the input or not. If not, we use the matrix trick instead - query: use the specified query to predict the conv filters - """ - unfold = ( - x.size(0) > 512 if unfold is None else unfold - ) # use unfold mode as default for long sequence to save memory - unfold = unfold or (incremental_state is not None) - assert query is None or not self.in_proj - - if query is None: - query = x - if unfold: - output = self._forward_unfolded(x, incremental_state, query) - else: - output = self._forward_expanded(x, incremental_state, query) - - if self.conv_bias is not None: - output = output + self.conv_bias.view(1, 1, -1) - return output - - def _forward_unfolded(self, x, incremental_state, query): - """The conventional implementation of convolutions. 
- Unfolding the input by having a window shifting to the right.""" - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - - if self.in_proj: - proj = self.weight_linear(x) - x = proj.narrow(2, 0, self.input_size).contiguous() - weight = ( - proj.narrow(2, self.input_size, H * K).contiguous().view(T * B * H, -1) - ) - else: - weight = self.weight_linear(query).view(T * B * H, -1) - - # renorm_padding is only implemented in _forward_expanded - assert not self.renorm_padding or incremental_state is not None - - if incremental_state is not None: - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is None: - input_buffer = x.new() - x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3) - if self.kernel_size > 1: - self._set_input_buffer( - incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :] - ) - x_unfold = x_unfold.view(T * B * H, R, -1) - else: - padding_l = self.padding_l - if K > T and padding_l == K - 1: - weight = weight.narrow(1, K - T, T) - K, padding_l = T, T - 1 - # unfold the input: T x B x C --> T' x B x C x K - x_unfold = unfold1d(x, K, padding_l, 0) - x_unfold = x_unfold.view(T * B * H, R, K) - - if self.weight_softmax and not self.renorm_padding: - weight = F.softmax(weight, dim=1) - weight = weight.narrow(1, 0, K) - - if incremental_state is not None: - weight = weight[:, -x_unfold.size(2) :] - K = weight.size(1) - - if self.weight_softmax and self.renorm_padding: - weight = F.softmax(weight, dim=1) - - weight = self.weight_dropout_module(weight, inplace=False) - - output = torch.bmm(x_unfold, weight.unsqueeze(2)) # T*B*H x R x 1 - output = output.view(T, B, C) - return output - - def _forward_expanded(self, x, incremental_stat, query): - """Turn the convolution filters into band matrices and do matrix multiplication. - This is faster when the sequence is short, but less memory efficient. - This is not used in the decoder during inference. 
- """ - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - if self.in_proj: - proj = self.weight_linear(x) - x = proj.narrow(2, 0, self.input_size).contiguous() - weight = ( - proj.narrow(2, self.input_size, H * K).contiguous().view(T * B * H, -1) - ) - else: - weight = self.weight_linear(query).view(T * B * H, -1) - - if not self.renorm_padding: - if self.weight_softmax: - weight = F.softmax(weight, dim=1) - weight = self.weight_dropout_module(weight, inplace=False) - weight = weight.narrow(1, 0, K).contiguous() - weight = weight.view(T, B * H, K).transpose(0, 1) - - x = x.view(T, B * H, R).transpose(0, 1) - if self.weight_softmax and self.renorm_padding: - # turn the convolution filters into band matrices - weight_expanded = weight.new(B * H, T, T + K - 1).fill_(float("-inf")) - weight_expanded.as_strided( - (B * H, T, K), (T * (T + K - 1), T + K, 1) - ).copy_(weight) - weight_expanded = weight_expanded.narrow(2, self.padding_l, T) - # normalize the weight over valid positions like self-attention - weight_expanded = F.softmax(weight_expanded, dim=2) - weight_expanded = self.weight_dropout_module(weight_expanded, inplace=False) - else: - P = self.padding_l - # For efficiency, we cut the kernel size and reduce the padding when the kernel is larger than the length - if K > T and P == K - 1: - weight = weight.narrow(2, K - T, T) - K, P = T, T - 1 - # turn the convolution filters into band matrices - weight_expanded = weight.new_zeros(B * H, T, T + K - 1, requires_grad=False) - weight_expanded.as_strided( - (B * H, T, K), (T * (T + K - 1), T + K, 1) - ).copy_(weight) - weight_expanded = weight_expanded.narrow(2, P, T) # B*H x T x T - output = torch.bmm(weight_expanded, x) - output = output.transpose(0, 1).contiguous().view(T, B, C) - return output - - def reorder_incremental_state(self, incremental_state, new_order): - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: 
- input_buffer = input_buffer.index_select(1, new_order) - self._set_input_buffer(incremental_state, input_buffer) - - def _get_input_buffer(self, incremental_state): - return utils.get_incremental_state(self, incremental_state, "input_buffer") - - def _set_input_buffer(self, incremental_state, new_buffer): - return utils.set_incremental_state( - self, incremental_state, "input_buffer", new_buffer - ) - - def extra_repr(self): - s = "{}, kernel_size={}, padding_l={}, num_heads={}, weight_softmax={}, conv_bias={}, renorm_padding={}, in_proj={}".format( - self.input_size, - self.kernel_size, - self.padding_l, - self.num_heads, - self.weight_softmax, - self.conv_bias is not None, - self.renorm_padding, - self.in_proj, - ) - - if self.query_size != self.input_size: - s += ", query_size={}".format(self.query_size) - if self.weight_dropout_module.p > 0.0: - s += ", weight_dropout={}".format(self.weight_dropout_module.p) - return s diff --git a/spaces/gradio/longformer/tvm/_ffi/_ctypes/vmobj.py b/spaces/gradio/longformer/tvm/_ffi/_ctypes/vmobj.py deleted file mode 100644 index 59930e55c382900c68691640be2b104d0bc8f75f..0000000000000000000000000000000000000000 --- a/spaces/gradio/longformer/tvm/_ffi/_ctypes/vmobj.py +++ /dev/null @@ -1,52 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. 
See the License for the -# specific language governing permissions and limitations -# under the License. -# pylint: disable=invalid-name -"""Runtime Object api""" -from __future__ import absolute_import - -import ctypes -from ..base import _LIB, check_call -from .types import TypeCode, RETURN_SWITCH - -ObjectHandle = ctypes.c_void_p - -"""Maps object type to its constructor""" -OBJECT_TYPE = {} - -def _register_object(index, cls): - """register object class""" - OBJECT_TYPE[index] = cls - - -def _return_object(x): - handle = x.v_handle - if not isinstance(handle, ObjectHandle): - handle = ObjectHandle(handle) - tag = ctypes.c_int() - check_call(_LIB.TVMGetObjectTag(handle, ctypes.byref(tag))) - cls = OBJECT_TYPE.get(tag.value, ObjectBase) - obj = cls(handle) - return obj - -RETURN_SWITCH[TypeCode.OBJECT_CELL] = _return_object - - -class ObjectBase(object): - __slots__ = ["handle"] - - def __init__(self, handle): - self.handle = handle diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/services/errorService.ts b/spaces/gsaivinay/Llama-2-13B-GGML-UI/services/errorService.ts deleted file mode 100644 index e22eb60b414ab375a71411ea7979c4c2a90d041e..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/services/errorService.ts +++ /dev/null @@ -1,35 +0,0 @@ -import { useMemo } from 'react'; - -import { useTranslation } from 'next-i18next'; - -import { ErrorMessage } from '@/types/error'; - -const useErrorService = () => { - const { t } = useTranslation('chat'); - - return { - getModelsError: useMemo( - () => (error: any) => { - return !error - ? null - : ({ - title: t('Error fetching models.'), - code: error.status || 'unknown', - messageLines: error.statusText - ? 
[error.statusText] - : [ - t( - 'Make sure your OpenAI API key is set in the bottom left of the sidebar.', - ), - t( - 'If you completed this step, OpenAI may be experiencing issues.', - ), - ], - } as ErrorMessage); - }, - [t], - ), - }; -}; - -export default useErrorService; diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/stylegan2/model.py b/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/stylegan2/model.py deleted file mode 100644 index ede4360148e260363887662bae7fe68c987ee60e..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/stylegan2/model.py +++ /dev/null @@ -1,674 +0,0 @@ -import math -import random -import torch -from torch import nn -from torch.nn import functional as F - -from .op.fused_act import FusedLeakyReLU, fused_leaky_relu -from .op.upfirdn2d import upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, 
down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - 
f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 
1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - 
in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu' - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - 
self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - return_features=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - ): - if not input_is_latent: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f'noise_{i}') for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = 
self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - elif return_features: - return image, out - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - 
self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out diff --git a/spaces/h2oai/wave-tour/examples/plot_histogram.py b/spaces/h2oai/wave-tour/examples/plot_histogram.py deleted file mode 100644 index d0817ffe8547cc550641511a3ebe5a506276d573..0000000000000000000000000000000000000000 --- 
a/spaces/h2oai/wave-tour/examples/plot_histogram.py +++ /dev/null @@ -1,24 +0,0 @@ -# Plot / Histogram -# Make a #histogram. #plot -# --- -from h2o_wave import site, data, ui - -page = site['/demo'] - -page.add('example', ui.plot_card( - box='1 1 4 5', - title='Histogram', - data=data('price low high', 8, rows=[ - (4, 50, 100), - (6, 100, 150), - (8, 150, 200), - (16, 350, 400), - (18, 400, 450), - (10, 200, 250), - (12, 250, 300), - (14, 300, 350), - ]), - plot=ui.plot([ui.mark(type='interval', y='=price', x1='=low', x2='=high', y_min=0)]) -)) - -page.save() diff --git a/spaces/hamzapehlivan/StyleRes/inference.sh b/spaces/hamzapehlivan/StyleRes/inference.sh deleted file mode 100644 index 0aad8ecb64ef26445649811b78291216ab19eb37..0000000000000000000000000000000000000000 --- a/spaces/hamzapehlivan/StyleRes/inference.sh +++ /dev/null @@ -1,8 +0,0 @@ -#Set GPU -export CUDA_VISIBLE_DEVICES='0' - -DATADIR='samples' -OUTDIR='results' -EDITCFG='options/editing_options/template.py' - -python inference.py --datadir=$DATADIR --outdir=$OUTDIR --edit_configs=$EDITCFG \ No newline at end of file diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/comm.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/comm.py deleted file mode 100644 index 9a5a69bd8005ff649329d5b8fb46b87ceac2b8ae..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/comm.py +++ /dev/null @@ -1,157 +0,0 @@ -""" -This file contains primitives for multi-gpu communication. -This is useful when doing distributed training. 
-""" - -import pickle -import time -import functools -import logging -import torch -import torch.distributed as dist -import numpy as np - - -def get_world_size(): - if not dist.is_available(): - return 1 - if not dist.is_initialized(): - return 1 - return dist.get_world_size() - - -def get_rank(): - if not dist.is_available(): - return 0 - if not dist.is_initialized(): - return 0 - return dist.get_rank() - - -def is_main_process(): - return get_rank() == 0 - - -def synchronize(): - """ - Helper function to synchronize (barrier) among all processes when - using distributed training - """ - if not dist.is_available(): - return - if not dist.is_initialized(): - return - world_size = dist.get_world_size() - if world_size == 1: - return - dist.barrier() - - -def all_gather(data): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors) - Args: - data: any picklable object - Returns: - list[data]: list of data gathered from each rank - """ - world_size = get_world_size() - if world_size == 1: - return [data] - - # serialized to a Tensor - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to("cuda") - - # obtain Tensor size of each rank - local_size = torch.LongTensor([tensor.numel()]).to("cuda") - size_list = [torch.LongTensor([0]).to("cuda") for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.ByteTensor(size=(max_size,)).to("cuda")) - if local_size != max_size: - padding = torch.ByteTensor(size=(max_size - local_size,)).to("cuda") - tensor = torch.cat((tensor, padding), dim=0) - dist.all_gather(tensor_list, tensor) - - data_list = [] - for size, tensor in zip(size_list, 
tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_dict(input_dict, average=True): - """ - Args: - input_dict (dict): all the values will be reduced - average (bool): whether to do average or sum - Reduce the values in the dictionary from all processes so that process with rank - 0 has the averaged results. Returns a dict with the same fields as - input_dict, after reduction. - """ - world_size = get_world_size() - if world_size < 2: - return input_dict - with torch.no_grad(): - names = [] - values = [] - # sort the keys so that they are consistent across processes - for k in sorted(input_dict.keys()): - names.append(k) - values.append(input_dict[k]) - values = torch.stack(values, dim=0) - dist.reduce(values, dst=0) - if dist.get_rank() == 0 and average: - # only main process gets accumulated, so only divide by - # world_size in this case - values /= world_size - reduced_dict = {k: v for k, v in zip(names, values)} - return reduced_dict - - -def broadcast_data(data): - if not torch.distributed.is_initialized(): - return data - rank = dist.get_rank() - if rank == 0: - data_tensor = torch.tensor(data + [0], device="cuda") - else: - data_tensor = torch.tensor(data + [1], device="cuda") - torch.distributed.broadcast(data_tensor, 0) - while data_tensor.cpu().numpy()[-1] == 1: - time.sleep(1) - - return data_tensor.cpu().numpy().tolist()[:-1] - - -def reduce_sum(tensor): - if get_world_size() <= 1: - return tensor - - tensor = tensor.clone() - dist.all_reduce(tensor, op=dist.ReduceOp.SUM) - return tensor - - -def shared_random_seed(): - """ - Returns: - int: a random number that is the same across all workers. - If workers need a shared RNG, they can use this shared seed to - create one. - - All workers must call this function, otherwise it will deadlock. 
- """ - ints = np.random.randint(2 ** 31) - all_ints = all_gather(ints) - return all_ints[0] \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/networks/backbone/resnext.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/networks/backbone/resnext.py deleted file mode 100644 index 96adb54146addc523be71591eb93afcc2c25307f..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/networks/backbone/resnext.py +++ /dev/null @@ -1,149 +0,0 @@ -#!/usr/bin/env python -# -*- encoding: utf-8 -*- - -""" -@Author : Peike Li -@Contact : peike.li@yahoo.com -@File : resnext.py.py -@Time : 8/11/19 8:58 PM -@Desc : -@License : This source code is licensed under the license found in the - LICENSE file in the root directory of this source tree. -""" -import functools -import torch.nn as nn -import math -from torch.utils.model_zoo import load_url - -from modules import InPlaceABNSync - -BatchNorm2d = functools.partial(InPlaceABNSync, activation='none') - -__all__ = ['ResNeXt', 'resnext101'] # support resnext 101 - -model_urls = { - 'resnext50': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnext50-imagenet.pth', - 'resnext101': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnext101-imagenet.pth' -} - - -def conv3x3(in_planes, out_planes, stride=1): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=False) - - -class GroupBottleneck(nn.Module): - expansion = 2 - - def __init__(self, inplanes, planes, stride=1, groups=1, downsample=None): - super(GroupBottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = BatchNorm2d(planes) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, - padding=1, groups=groups, bias=False) - self.bn2 = BatchNorm2d(planes) - 
self.conv3 = nn.Conv2d(planes, planes * 2, kernel_size=1, bias=False) - self.bn3 = BatchNorm2d(planes * 2) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class ResNeXt(nn.Module): - - def __init__(self, block, layers, groups=32, num_classes=1000): - self.inplanes = 128 - super(ResNeXt, self).__init__() - self.conv1 = conv3x3(3, 64, stride=2) - self.bn1 = BatchNorm2d(64) - self.relu1 = nn.ReLU(inplace=True) - self.conv2 = conv3x3(64, 64) - self.bn2 = BatchNorm2d(64) - self.relu2 = nn.ReLU(inplace=True) - self.conv3 = conv3x3(64, 128) - self.bn3 = BatchNorm2d(128) - self.relu3 = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - self.layer1 = self._make_layer(block, 128, layers[0], groups=groups) - self.layer2 = self._make_layer(block, 256, layers[1], stride=2, groups=groups) - self.layer3 = self._make_layer(block, 512, layers[2], stride=2, groups=groups) - self.layer4 = self._make_layer(block, 1024, layers[3], stride=2, groups=groups) - self.avgpool = nn.AvgPool2d(7, stride=1) - self.fc = nn.Linear(1024 * block.expansion, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels // m.groups - m.weight.data.normal_(0, math.sqrt(2. 
/ n)) - elif isinstance(m, BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - - def _make_layer(self, block, planes, blocks, stride=1, groups=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, groups, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, groups=groups)) - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - x = x.view(x.size(0), -1) - x = self.fc(x) - - return x - - -def resnext101(pretrained=False, **kwargs): - """Constructs a ResNet-101 model. - Args: - pretrained (bool): If True, returns a model pre-trained on Places - """ - model = ResNeXt(GroupBottleneck, [3, 4, 23, 3], **kwargs) - if pretrained: - model.load_state_dict(load_url(model_urls['resnext101']), strict=False) - return model diff --git a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/README.zh-CN.md b/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/README.zh-CN.md deleted file mode 100644 index d8b2a900bf963e0fbb6b733fc77fc504dddbf5ae..0000000000000000000000000000000000000000 --- a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/README.zh-CN.md +++ /dev/null @@ -1,490 +0,0 @@ -
      -

      - - -

-
-[English](README.md) | [简体中文](README.zh-CN.md)
      - -
      - YOLOv5 CI - YOLOv5 Citation - Docker Pulls -
      - Run on Gradient - Open In Colab - Open In Kaggle -
      -
-
-YOLOv5 🚀 is the world's most loved vision AI, representing Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
-
-We hope that the resources here help you get the most out of YOLOv5. Please browse the YOLOv5 Docs for details, raise an issue on GitHub for support, and join our Discord community for questions and discussions!
-
-To request an Enterprise License please complete the form at [Ultralytics Licensing](https://ultralytics.com/license)
-
-
      - - - - - - - - - - - - - - - - - - - - -
      -
      - -##
YOLOv8 🚀 NEW
-
-We are thrilled to announce the launch of Ultralytics YOLOv8 🚀, our new cutting-edge, state-of-the-art (SOTA) model released at **[https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics)**.
-YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection, image segmentation, and image classification tasks.
-
-See the [YOLOv8 Docs](https://docs.ultralytics.com) for details and get started with:
-
-[![PyPI version](https://badge.fury.io/py/ultralytics.svg)](https://badge.fury.io/py/ultralytics) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics)
-
-```commandline
-pip install ultralytics
-```
-
-
      - - -
      - -##
Documentation
-
-See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment. See below for quickstart examples.
-
-
-Install
-
-Clone the repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a [**Python>=3.8.0**](https://www.python.org/) environment, including [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/).
-
-```bash
-git clone https://github.com/ultralytics/yolov5  # clone
-cd yolov5
-pip install -r requirements.txt  # install
-```
-
-
      - -
      -推理 - -使用 YOLOv5 [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading) 推理。最新 [模型](https://github.com/ultralytics/yolov5/tree/master/models) 将自动的从 -YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) 中下载。 - -```python -import torch - -# Model -model = torch.hub.load("ultralytics/yolov5", "yolov5s") # or yolov5n - yolov5x6, custom - -# Images -img = "https://ultralytics.com/images/zidane.jpg" # or file, Path, PIL, OpenCV, numpy, list - -# Inference -results = model(img) - -# Results -results.print() # or .show(), .save(), .crop(), .pandas(), etc. -``` - -
      - -
      -使用 detect.py 推理 - -`detect.py` 在各种来源上运行推理, [模型](https://github.com/ultralytics/yolov5/tree/master/models) 自动从 -最新的YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) 中下载,并将结果保存到 `runs/detect` 。 - -```bash -python detect.py --weights yolov5s.pt --source 0 # webcam - img.jpg # image - vid.mp4 # video - screen # screenshot - path/ # directory - list.txt # list of images - list.streams # list of streams - 'path/*.jpg' # glob - 'https://youtu.be/Zgi9g1ksQHc' # YouTube - 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream -``` - -
      - -
      -训练 - -下面的命令重现 YOLOv5 在 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) 数据集上的结果。 -最新的 [模型](https://github.com/ultralytics/yolov5/tree/master/models) 和 [数据集](https://github.com/ultralytics/yolov5/tree/master/data) -将自动的从 YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) 中下载。 -YOLOv5n/s/m/l/x 在 V100 GPU 的训练时间为 1/2/4/6/8 天( [多GPU](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training) 训练速度更快)。 -尽可能使用更大的 `--batch-size` ,或通过 `--batch-size -1` 实现 -YOLOv5 [自动批处理](https://github.com/ultralytics/yolov5/pull/5092) 。下方显示的 batchsize 适用于 V100-16GB。 - -```bash -python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128 - yolov5s 64 - yolov5m 40 - yolov5l 24 - yolov5x 16 -``` - - - -
      - -
      -教程 - -- [训练自定义数据](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data) 🚀 推荐 -- [获得最佳训练结果的技巧](https://docs.ultralytics.com/yolov5/tutorials/tips_for_best_training_results) ☘️ -- [多GPU训练](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training) -- [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading) 🌟 新 -- [TFLite,ONNX,CoreML,TensorRT导出](https://docs.ultralytics.com/yolov5/tutorials/model_export) 🚀 -- [NVIDIA Jetson平台部署](https://docs.ultralytics.com/yolov5/tutorials/running_on_jetson_nano) 🌟 新 -- [测试时增强 (TTA)](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation) -- [模型集成](https://docs.ultralytics.com/yolov5/tutorials/model_ensembling) -- [模型剪枝/稀疏](https://docs.ultralytics.com/yolov5/tutorials/model_pruning_and_sparsity) -- [超参数进化](https://docs.ultralytics.com/yolov5/tutorials/hyperparameter_evolution) -- [冻结层的迁移学习](https://docs.ultralytics.com/yolov5/tutorials/transfer_learning_with_frozen_layers) -- [架构概述](https://docs.ultralytics.com/yolov5/tutorials/architecture_description) 🌟 新 -- [Roboflow用于数据集、标注和主动学习](https://docs.ultralytics.com/yolov5/tutorials/roboflow_datasets_integration) -- [ClearML日志记录](https://docs.ultralytics.com/yolov5/tutorials/clearml_logging_integration) 🌟 新 -- [使用Neural Magic的Deepsparse的YOLOv5](https://docs.ultralytics.com/yolov5/tutorials/neural_magic_pruning_quantization) 🌟 新 -- [Comet日志记录](https://docs.ultralytics.com/yolov5/tutorials/comet_logging_integration) 🌟 新 - -
      - -##
      模块集成
      - -
      - - -
      -
      - -
      - - - - - - - - - - - -
      - -| Roboflow | ClearML ⭐ 新 | Comet ⭐ 新 | Neural Magic ⭐ 新 | -| :--------------------------------------------------------------------------------: | :-------------------------------------------------------------------------: | :--------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------: | -| 将您的自定义数据集进行标注并直接导出到 YOLOv5 以进行训练 [Roboflow](https://roboflow.com/?ref=ultralytics) | 自动跟踪、可视化甚至远程训练 YOLOv5 [ClearML](https://cutt.ly/yolov5-readme-clearml)(开源!) | 永远免费,[Comet](https://bit.ly/yolov5-readme-comet2)可让您保存 YOLOv5 模型、恢复训练以及交互式可视化和调试预测 | 使用 [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic),运行 YOLOv5 推理的速度最高可提高6倍 | - -##
      Ultralytics HUB
      - -[Ultralytics HUB](https://bit.ly/ultralytics_hub) 是我们的⭐**新的**用于可视化数据集、训练 YOLOv5 🚀 模型并以无缝体验部署到现实世界的无代码解决方案。现在开始 **免费** 使用它! - - - - -##
      为什么选择 YOLOv5
      - -YOLOv5 超级容易上手,简单易学。我们优先考虑现实世界的结果。 - -

      -
      - YOLOv5-P5 640 图 - -

      -
      -
      - 图表笔记 - -- **COCO AP val** 表示 mAP@0.5:0.95 指标,在 [COCO val2017](http://cocodataset.org) 数据集的 5000 张图像上测得, 图像包含 256 到 1536 各种推理大小。 -- **显卡推理速度** 为在 [COCO val2017](http://cocodataset.org) 数据集上的平均推理时间,使用 [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100实例,batchsize 为 32 。 -- **EfficientDet** 数据来自 [google/automl](https://github.com/google/automl) , batchsize 为32。 -- **复现命令** 为 `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt` - -
      - -### 预训练模型 - -| 模型 | 尺寸
      (像素) | mAPval
      50-95 | mAPval
      50 | 推理速度
      CPU b1
      (ms) | 推理速度
      V100 b1
      (ms) | 速度
      V100 b32
      (ms) | 参数量
      (M) | FLOPs
      @640 (B) | -| ---------------------------------------------------------------------------------------------- | --------------- | -------------------- | ----------------- | --------------------------- | ---------------------------- | --------------------------- | --------------- | ---------------------- | -| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** | -| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 | -| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 | -| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 | -| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 | -| | | | | | | | | | -| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 | -| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 | -| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 | -| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 | -| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x6.pt)
      +[TTA] | 1280
      1536 | 55.0
      **55.8** | 72.7
      **72.7** | 3136
      - | 26.2
      - | 19.4
      - | 140.7
      - | 209.8
      - | - -
      - 笔记 - -- 所有模型都使用默认配置,训练 300 epochs。n和s模型使用 [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) ,其他模型都使用 [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml) 。 -- \*\*mAPval\*\*在单模型单尺度上计算,数据集使用 [COCO val2017](http://cocodataset.org) 。
      复现命令 `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65` -- **推理速度**在 COCO val 图像总体时间上进行平均得到,测试环境使用[AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/)实例。 NMS 时间 (大约 1 ms/img) 不包括在内。
      复现命令 `python val.py --data coco.yaml --img 640 --task speed --batch 1` -- **TTA** [测试时数据增强](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation) 包括反射和尺度变换。
      复现命令 `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment` - -
      - -##
      实例分割模型 ⭐ 新
      - -我们新的 YOLOv5 [release v7.0](https://github.com/ultralytics/yolov5/releases/v7.0) 实例分割模型是世界上最快和最准确的模型,击败所有当前 [SOTA 基准](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco)。我们使它非常易于训练、验证和部署。更多细节请查看 [发行说明](https://github.com/ultralytics/yolov5/releases/v7.0) 或访问我们的 [YOLOv5 分割 Colab 笔记本](https://github.com/ultralytics/yolov5/blob/master/segment/tutorial.ipynb) 以快速入门。 - -
      - 实例分割模型列表 - -
      - -
      - - -
      - -我们使用 A100 GPU 在 COCO 上以 640 图像大小训练了 300 epochs 得到 YOLOv5 分割模型。我们将所有模型导出到 ONNX FP32 以进行 CPU 速度测试,并导出到 TensorRT FP16 以进行 GPU 速度测试。为了便于再现,我们在 Google [Colab Pro](https://colab.research.google.com/signup) 上进行了所有速度测试。 - -| 模型 | 尺寸
      (像素) | mAPbox
      50-95 | mAPmask
      50-95 | 训练时长
      300 epochs
      A100 GPU(小时) | 推理速度
      ONNX CPU
      (ms) | 推理速度
      TRT A100
      (ms) | 参数量
      (M) | FLOPs
      @640 (B) | -| ------------------------------------------------------------------------------------------ | --------------- | -------------------- | --------------------- | --------------------------------------- | ----------------------------- | ----------------------------- | --------------- | ---------------------- | -| [YOLOv5n-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-seg.pt) | 640 | 27.6 | 23.4 | 80:17 | **62.7** | **1.2** | **2.0** | **7.1** | -| [YOLOv5s-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-seg.pt) | 640 | 37.6 | 31.7 | 88:16 | 173.3 | 1.4 | 7.6 | 26.4 | -| [YOLOv5m-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-seg.pt) | 640 | 45.0 | 37.1 | 108:36 | 427.0 | 2.2 | 22.0 | 70.8 | -| [YOLOv5l-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-seg.pt) | 640 | 49.0 | 39.9 | 66:43 (2x) | 857.4 | 2.9 | 47.9 | 147.7 | -| [YOLOv5x-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-seg.pt) | 640 | **50.7** | **41.4** | 62:56 (3x) | 1579.2 | 4.5 | 88.8 | 265.7 | - -- 所有模型使用 SGD 优化器训练, 都使用 `lr0=0.01` 和 `weight_decay=5e-5` 参数, 图像大小为 640 。
      训练 log 可以查看 https://wandb.ai/glenn-jocher/YOLOv5_v70_official -- **准确性**结果都在 COCO 数据集上,使用单模型单尺度测试得到。
      复现命令 `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt` -- **推理速度**是使用 100 张图像推理时间进行平均得到,测试环境使用 [Colab Pro](https://colab.research.google.com/signup) 上 A100 高 RAM 实例。结果仅表示推理速度(NMS 每张图像增加约 1 毫秒)。
      复现命令 `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt --batch 1` -- **模型转换**到 FP32 的 ONNX 和 FP16 的 TensorRT 脚本为 `export.py`.
      运行命令 `python export.py --weights yolov5s-seg.pt --include engine --device 0 --half` - -
      - -
      - 分割模型使用示例  Open In Colab - -### 训练 - -YOLOv5 分割训练支持自动下载 COCO128-seg 分割数据集,用户仅需在启动指令中包含 `--data coco128-seg.yaml` 参数。 若要手动下载,使用命令 `bash data/scripts/get_coco.sh --train --val --segments`, 在下载完毕后,使用命令 `python train.py --data coco.yaml` 开启训练。 - -```bash -# 单 GPU -python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 - -# 多 GPU, DDP 模式 -python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 --device 0,1,2,3 -``` - -### 验证 - -在 COCO 数据集上验证 YOLOv5s-seg mask mAP: - -```bash -bash data/scripts/get_coco.sh --val --segments # 下载 COCO val segments 数据集 (780MB, 5000 images) -python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640 # 验证 -``` - -### 预测 - -使用预训练的 YOLOv5m-seg.pt 来预测 bus.jpg: - -```bash -python segment/predict.py --weights yolov5m-seg.pt --source data/images/bus.jpg -``` - -```python -model = torch.hub.load( - "ultralytics/yolov5", "custom", "yolov5m-seg.pt" -) # 从 PyTorch Hub 加载模型 (WARNING: 推理暂未支持) -``` - -| ![zidane](https://user-images.githubusercontent.com/26833433/203113421-decef4c4-183d-4a0a-a6c2-6435b33bc5d3.jpg) | ![bus](https://user-images.githubusercontent.com/26833433/203113416-11fe0025-69f7-4874-a0a6-65d0bfe2999a.jpg) | -| ---------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | - -### 模型导出 - -将 YOLOv5s-seg 模型导出到 ONNX 和 TensorRT: - -```bash -python export.py --weights yolov5s-seg.pt --include onnx engine --img 640 --device 0 -``` - -
      - -##
      分类网络 ⭐ 新
      - -YOLOv5 [release v6.2](https://github.com/ultralytics/yolov5/releases) 带来对分类模型训练、验证和部署的支持!详情请查看 [发行说明](https://github.com/ultralytics/yolov5/releases/v6.2) 或访问我们的 [YOLOv5 分类 Colab 笔记本](https://github.com/ultralytics/yolov5/blob/master/classify/tutorial.ipynb) 以快速入门。 - -
      - 分类网络模型 - -
      - -我们使用 4xA100 实例在 ImageNet 上训练了 90 个 epochs 得到 YOLOv5-cls 分类模型,我们训练了 ResNet 和 EfficientNet 模型以及相同的默认训练设置以进行比较。我们将所有模型导出到 ONNX FP32 以进行 CPU 速度测试,并导出到 TensorRT FP16 以进行 GPU 速度测试。为了便于重现,我们在 Google 上进行了所有速度测试 [Colab Pro](https://colab.research.google.com/signup) 。 - -| 模型 | 尺寸
      (像素) | acc
      top1 | acc
      top5 | 训练时长
      90 epochs
      4xA100(小时) | 推理速度
      ONNX CPU
      (ms) | 推理速度
      TensorRT V100
      (ms) | 参数
      (M) | FLOPs
      @640 (B) | -| -------------------------------------------------------------------------------------------------- | --------------- | ---------------- | ---------------- | ------------------------------------ | ----------------------------- | ---------------------------------- | -------------- | ---------------------- | -| [YOLOv5n-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-cls.pt) | 224 | 64.6 | 85.4 | 7:59 | **3.3** | **0.5** | **2.5** | **0.5** | -| [YOLOv5s-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-cls.pt) | 224 | 71.5 | 90.2 | 8:09 | 6.6 | 0.6 | 5.4 | 1.4 | -| [YOLOv5m-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-cls.pt) | 224 | 75.9 | 92.9 | 10:06 | 15.5 | 0.9 | 12.9 | 3.9 | -| [YOLOv5l-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-cls.pt) | 224 | 78.0 | 94.0 | 11:56 | 26.9 | 1.4 | 26.5 | 8.5 | -| [YOLOv5x-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-cls.pt) | 224 | **79.0** | **94.4** | 15:04 | 54.3 | 1.8 | 48.1 | 15.9 | -| | | | | | | | | | -| [ResNet18](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet18.pt) | 224 | 70.3 | 89.5 | **6:47** | 11.2 | 0.5 | 11.7 | 3.7 | -| [ResNet34](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet34.pt) | 224 | 73.9 | 91.8 | 8:33 | 20.6 | 0.9 | 21.8 | 7.4 | -| [ResNet50](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet50.pt) | 224 | 76.8 | 93.4 | 11:10 | 23.4 | 1.0 | 25.6 | 8.5 | -| [ResNet101](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet101.pt) | 224 | 78.5 | 94.3 | 17:10 | 42.1 | 1.9 | 44.5 | 15.9 | -| | | | | | | | | | -| [EfficientNet_b0](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b0.pt) | 224 | 75.1 | 92.4 | 13:03 | 12.5 | 1.3 | 5.3 | 1.0 | -| [EfficientNet_b1](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b1.pt) | 224 | 76.4 | 93.2 
| 17:04 | 14.9 | 1.6 | 7.8 | 1.5 | -| [EfficientNet_b2](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b2.pt) | 224 | 76.6 | 93.4 | 17:10 | 15.9 | 1.6 | 9.1 | 1.7 | -| [EfficientNet_b3](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b3.pt) | 224 | 77.7 | 94.0 | 19:19 | 18.9 | 1.9 | 12.2 | 2.4 | - -
      - Table Notes (点击以展开) - -- 所有模型都使用 SGD 优化器训练 90 个 epochs,都使用 `lr0=0.001` 和 `weight_decay=5e-5` 参数, 图像大小为 224 ,且都使用默认设置。
      训练 log 可以查看 https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2 -- **准确性**都在单模型单尺度上计算,数据集使用 [ImageNet-1k](https://www.image-net.org/index.php) 。
      复现命令 `python classify/val.py --data ../datasets/imagenet --img 224` -- **推理速度**是使用 100 个推理图像进行平均得到,测试环境使用谷歌 [Colab Pro](https://colab.research.google.com/signup) V100 高 RAM 实例。
      复现命令 `python classify/val.py --data ../datasets/imagenet --img 224 --batch 1` -- **模型导出**到 FP32 的 ONNX 和 FP16 的 TensorRT 使用 `export.py` 。
      复现命令 `python export.py --weights yolov5s-cls.pt --include engine onnx --imgsz 224` -
      -
      - -
      - 分类训练示例  Open In Colab - -### 训练 - -YOLOv5 分类训练支持自动下载 MNIST、Fashion-MNIST、CIFAR10、CIFAR100、Imagenette、Imagewoof 和 ImageNet 数据集,命令中使用 `--data` 即可。 MNIST 示例 `--data mnist` 。 - -```bash -# 单 GPU -python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128 - -# 多 GPU, DDP 模式 -python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3 -``` - -### 验证 - -在 ImageNet-1k 数据集上验证 YOLOv5m-cls 的准确性: - -```bash -bash data/scripts/get_imagenet.sh --val # download ImageNet val split (6.3G, 50000 images) -python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224 # validate -``` - -### 预测 - -使用预训练的 YOLOv5s-cls.pt 来预测 bus.jpg: - -```bash -python classify/predict.py --weights yolov5s-cls.pt --source data/images/bus.jpg -``` - -```python -model = torch.hub.load( - "ultralytics/yolov5", "custom", "yolov5s-cls.pt" -) # load from PyTorch Hub -``` - -### 模型导出 - -将一组经过训练的 YOLOv5s-cls、ResNet 和 EfficientNet 模型导出到 ONNX 和 TensorRT: - -```bash -python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --include onnx engine --img 224 -``` - -
      - -##
      环境
      - -使用下面我们经过验证的环境,在几秒钟内开始使用 YOLOv5 。单击下面的图标了解详细信息。 - -
      - - - - - - - - - - - - - - - - - -
      - -##
      贡献
      - -我们喜欢您的意见或建议!我们希望尽可能简单和透明地为 YOLOv5 做出贡献。请看我们的 [投稿指南](https://docs.ultralytics.com/help/contributing/),并填写 [YOLOv5调查](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) 向我们发送您的体验反馈。感谢我们所有的贡献者! - - - - - - -##
      许可证
      - -Ultralytics 提供两种许可证选项以适应各种使用场景: - -- **AGPL-3.0 许可证**:这个[OSI 批准](https://opensource.org/licenses/)的开源许可证非常适合学生和爱好者,可以推动开放的协作和知识分享。请查看[LICENSE](https://github.com/ultralytics/yolov5/blob/master/LICENSE) 文件以了解更多细节。 -- **企业许可证**:专为商业用途设计,该许可证允许将 Ultralytics 的软件和 AI 模型无缝集成到商业产品和服务中,从而绕过 AGPL-3.0 的开源要求。如果您的场景涉及将我们的解决方案嵌入到商业产品中,请通过 [Ultralytics Licensing](https://ultralytics.com/license)与我们联系。 - -##
      联系方式
      - -对于 Ultralytics 的错误报告和功能请求,请访问 [GitHub Issues](https://github.com/ultralytics/yolov5/issues),并加入我们的 [Discord](https://ultralytics.com/discord) 社区进行问题和讨论! - -
      -
      - - - - - - - - - - - - - - - - - - - - -
      - -[tta]: https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation diff --git a/spaces/hdhzk/bingo/src/app/layout.tsx b/spaces/hdhzk/bingo/src/app/layout.tsx deleted file mode 100644 index 8b5122759987177b8dc4e4356d1d06cea25c15ea..0000000000000000000000000000000000000000 --- a/spaces/hdhzk/bingo/src/app/layout.tsx +++ /dev/null @@ -1,47 +0,0 @@ -import { Metadata } from 'next' -import { Toaster } from 'react-hot-toast' -import { TailwindIndicator } from '@/components/tailwind-indicator' -import { Providers } from '@/components/providers' -import { Header } from '@/components/header' - -import '@/app/globals.scss' - - -export const metadata: Metadata = { - title: { - default: 'Bing AI Chatbot', - template: `%s - Bing AI Chatbot` - }, - description: 'Bing AI Chatbot Web App.', - themeColor: [ - { media: '(prefers-color-scheme: light)', color: 'white' }, - { media: '(prefers-color-scheme: dark)', color: 'dark' } - ], - icons: { - icon: '/favicon.ico', - shortcut: '../assets/images/logo.svg', - apple: '../assets/images/logo.svg' - } -} - -interface RootLayoutProps { - children: React.ReactNode -} - -export default function RootLayout({ children }: RootLayoutProps) { - return ( - - - - -
      - {/* @ts-ignore */} -
      -
      {children}
      -
      - -
      - - - ) -} diff --git a/spaces/heiyubili/bingo/src/components/chat.tsx b/spaces/heiyubili/bingo/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/heiyubili/bingo/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
      - -
      - - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
      - -
      - ) : null} - - ) : null} -
      - - -
      - ) -} diff --git a/spaces/hekbobo/bingo/src/components/ui/codeblock.tsx b/spaces/hekbobo/bingo/src/components/ui/codeblock.tsx deleted file mode 100644 index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000 --- a/spaces/hekbobo/bingo/src/components/ui/codeblock.tsx +++ /dev/null @@ -1,142 +0,0 @@ -'use client' - -import { FC, memo } from 'react' -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter' -import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism' - -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' -import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons' -import { Button } from '@/components/ui/button' - -interface Props { - language: string - value: string -} - -interface languageMap { - [key: string]: string | undefined -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - css: '.css' - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -} - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0 - let result = '' - for (let i = 0; i < length; i++) { - result += chars.charAt(Math.floor(Math.random() * chars.length)) - } - return lowercase ? 
result.toLowerCase() : result -} - -const CodeBlock: FC = memo(({ language, value }) => { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - - const downloadAsFile = () => { - if (typeof window === 'undefined') { - return - } - const fileExtension = programmingLanguages[language] || '.file' - const suggestedFileName = `file-${generateRandomString( - 3, - true - )}${fileExtension}` - const fileName = window.prompt('Enter file name', suggestedFileName) - - if (!fileName) { - // User pressed cancel on prompt. - return - } - - const blob = new Blob([value], { type: 'text/plain' }) - const url = URL.createObjectURL(blob) - const link = document.createElement('a') - link.download = fileName - link.href = url - link.style.display = 'none' - document.body.appendChild(link) - link.click() - document.body.removeChild(link) - URL.revokeObjectURL(url) - } - - const onCopy = () => { - if (isCopied) return - copyToClipboard(value) - } - - return ( -
      -
      - {language} -
      - - -
      -
      - - {value} - -
      - ) -}) -CodeBlock.displayName = 'CodeBlock' - -export { CodeBlock } diff --git a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/.md b/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/.md deleted file mode 100644 index 95da130a06fb39eca941a8baeaf194ea4c540f28..0000000000000000000000000000000000000000 --- a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/.md +++ /dev/null @@ -1,60 +0,0 @@ -## ESNO - Visionary 2012 - - - - - - - - - -**Download File >>>>> [https://bionallopi.blogspot.com/?file=2txmim](https://bionallopi.blogspot.com/?file=2txmim)** - - - - - - - - - - - - - -# ESNO - Visionary 2012: A New Album by the Rising Star of Electronic Music - - - -ESNO, the Japanese producer and singer who has been making waves in the electronic music scene, has released his debut album, Visionary 2012. The album features 12 tracks that showcase his unique blend of synth-pop, electro-funk, and future bass. ESNO's music is inspired by his love for anime, video games, and sci-fi, as well as his personal experiences and emotions. - - - -Some of the highlights of the album include the opening track, "Visionary", which sets the tone with its catchy melody and uplifting lyrics; "Starlight", a collaboration with vocaloid Hatsune Miku that expresses his admiration for her; "Reboot", a fast-paced and energetic song that reflects his desire to start anew; and "Dreamscape", a soothing and dreamy track that closes the album with a sense of hope. - - - -ESNO - Visionary 2012 is available on various streaming platforms and digital stores. Fans can also purchase a limited edition CD that comes with a booklet and a poster. ESNO has also announced that he will be performing live at several venues across Japan and Asia in the coming months. For more information, visit his official website and social media accounts. - - - -ESNO - Visionary 2012 is not only a showcase of his musical talent, but also a tribute to his influences and inspirations. 
ESNO has cited artists such as Daft Punk, Porter Robinson, and Madeon as some of his role models in the electronic music genre. He has also expressed his admiration for anime and video game composers such as Yoko Kanno, Nobuo Uematsu, and Yuki Kajiura. He said that he hopes to create music that can touch people's hearts and make them feel happy. - - - -ESNO started his musical career in 2010, when he uploaded his first song, "Rainbow", on YouTube. The song gained positive feedback and encouraged him to continue making music. He then joined the online community of vocaloid producers, where he learned how to use the software and create songs with vocaloid singers. He gained more popularity and recognition with his vocaloid songs, such as "Snowflake", "Lunar", and "Mirage". He also participated in various vocaloid events and contests, where he met other producers and singers. - - - -In 2011, he decided to expand his musical horizons and started singing in his own songs. He also experimented with different genres and styles of electronic music, such as electro-house, dubstep, and glitch-hop. He released several singles and EPs, such as "Spark", "Glitch", and "Synapse". He also collaborated with other artists, such as Miku-tan, Kanae Asaba, and DJ'TEKINA//SOMETHING. He said that he enjoys working with other musicians and learning from them. - - - -In 2012, he announced that he was working on his first full-length album, Visionary 2012. He said that the album was his most ambitious and personal project to date, and that it represented his vision of the future. He said that he wanted to share his music with the world and inspire others to pursue their dreams. He also said that he was grateful for the support of his fans and friends, who motivated him to keep making music. 
- - - - - - - diff --git a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Vacbi-A320-Free-Download-5.md b/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Vacbi-A320-Free-Download-5.md deleted file mode 100644 index f1071fb181048c7f346cc1a73f4c161daa7ffdd2..0000000000000000000000000000000000000000 --- a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Vacbi-A320-Free-Download-5.md +++ /dev/null @@ -1,70 +0,0 @@ -## vacbi a320 free download 5 - - - - - - ![Vacbi A320 Free Download 5](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRFWLIwC3CU1XqbuIwGb-87NocP06m_s4rF8cssFS9PE5HzhIHvX8mCK18) - - - - - -**LINK === [https://www.google.com/url?q=https%3A%2F%2Ftlniurl.com%2F2txRUN&sa=D&sntz=1&usg=AOvVaw38N\_XbvNPN5RWCs1694GLH](https://www.google.com/url?q=https%3A%2F%2Ftlniurl.com%2F2txRUN&sa=D&sntz=1&usg=AOvVaw38N\_XbvNPN5RWCs1694GLH)** - - - - - - - - - - - - -# How to Download VACBI A320 for Free - - - -VACBI A320 is a video and computer-based instruction program that teaches pilots how to operate the Airbus A320 family of aircraft. The program covers various topics such as systems, procedures, performance, and emergencies. VACBI A320 is a valuable tool for both initial and recurrent training of A320 pilots. - - - -However, VACBI A320 is not easy to find online, as it is usually distributed by airlines or training centers to their pilots. If you are looking for a way to download VACBI A320 for free, you may have to resort to some unconventional methods. Here are some possible options: - - - -- Search for torrent files or magnet links of VACBI A320 on peer-to-peer networks such as BitTorrent or uTorrent. You may need a VPN service to access these networks and avoid legal issues. Be careful of malware and viruses that may be hidden in the files. - -- Look for online forums or groups where pilots share their training materials and resources. 
You may have to register and participate in these communities to gain access to their files. Be respectful and follow their rules and etiquette. - -- Contact someone who has access to VACBI A320 and ask them to share it with you. This may be a friend, a colleague, or a stranger on the internet. Be polite and offer something in return, such as another training program or a donation. - - - -These are some of the possible ways to download VACBI A320 for free. However, we do not endorse or recommend any of these methods, as they may be illegal, unethical, or risky. The best way to obtain VACBI A320 is to purchase it from an authorized source or enroll in a reputable training course. This way, you can ensure that you get the most updated and accurate version of the program, as well as support the developers and instructors who created it. - -VACBI A320 has many benefits for pilots who want to learn or refresh their knowledge of the A320 family. Some of these benefits are: - - - -- VACBI A320 is interactive and engaging, as it uses videos, animations, graphics, and quizzes to explain the concepts and procedures. - -- VACBI A320 is flexible and convenient, as it can be accessed from any computer with a DVD drive and compatible software. Pilots can study at their own pace and time, and repeat the lessons as many times as they want. - -- VACBI A320 is comprehensive and up to date, as it covers all the systems and components of the A320 family, including the latest modifications and enhancements. Pilots can learn about the normal and abnormal operations of the aircraft, as well as the best practices and tips. - - - -VACBI A320 is a proven and trusted program that has been used by thousands of pilots around the world. It is designed by Airbus experts and instructors, and follows the official Airbus syllabus and standards. 
VACBI A320 is recognized and approved by many aviation authorities and airlines as a valid training tool. - - dfd1c89656 - - - - - diff --git a/spaces/hkunlp/Binder/resources/demo_description.md b/spaces/hkunlp/Binder/resources/demo_description.md deleted file mode 100644 index d54dc7fee6eda0ecdde4448c6cecad9ab3159c5f..0000000000000000000000000000000000000000 --- a/spaces/hkunlp/Binder/resources/demo_description.md +++ /dev/null @@ -1,10 +0,0 @@ -This is an interactive demo of [Binder](https://lm-code-binder.github.io/) based on GPT3 Codex. -You can input a question (one that may not be answerable by pure SQL/Python). -The demo will generate a Binder program (e.g., SQL/Python bound with API calls to GPT3 Codex), -and then execute the Binder program (API calls by prompting Codex, and SQL/Python by using SQL/Python interpreters) to derive the final answer. - -*For more details, check out the [project website](https://lm-code-binder.github.io/).* - -**Note:** -- The demo might be slow when (high) concurrent requests occur because of the query limit of OpenAI GPT3 Codex. -- To speed up the demonstration, we only employed a simpler version of Binder (e.g., no majority vote); please check the [paper](https://arxiv.org/abs/2210.02875) and [code](https://github.com/HKUNLP/Binder) for more details.
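The Binder flow described in the demo text (generate a program for the question, then run it with an ordinary interpreter to derive the answer) can be sketched in a few lines. This is a minimal illustration only: the `generate_program` function below is a hypothetical stand-in for the LLM-backed generation step, not Binder's actual API, and the table data is invented.

```python
import sqlite3

def generate_program(question: str) -> str:
    # Stand-in for the generation step: in Binder, the language model
    # would emit a SQL/Python program (possibly with embedded API calls)
    # tailored to the question. Here we return a fixed query.
    return "SELECT COUNT(*) FROM players WHERE team = 'Lakers'"

def execute_program(sql: str, rows) -> int:
    # The execution step: run the generated program with a normal
    # interpreter (here, SQLite over an in-memory table).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE players (name TEXT, team TEXT)")
    conn.executemany("INSERT INTO players VALUES (?, ?)", rows)
    (answer,) = conn.execute(sql).fetchone()
    conn.close()
    return answer

rows = [("LeBron", "Lakers"), ("Curry", "Warriors"), ("Davis", "Lakers")]
program = generate_program("How many players are on the Lakers?")
print(execute_program(program, rows))  # -> 2
```

The point of the split is that the interpreter, not the model, produces the final answer, so the model only has to write a checkable program.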
\ No newline at end of file diff --git a/spaces/hugging-fellows/paper-to-pokemon/README.md b/spaces/hugging-fellows/paper-to-pokemon/README.md deleted file mode 100644 index a0fec866f58cc0d5f9e535c7a03e55d407162750..0000000000000000000000000000000000000000 --- a/spaces/hugging-fellows/paper-to-pokemon/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Paper to Pokémon -emoji: 🗒️➡️🐹 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -duplicated_from: lambdalabs/text-to-pokemon ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git "a/spaces/huggingface/Model_Cards_Writing_Tool/pages/9_\360\237\223\214_Citation.py" "b/spaces/huggingface/Model_Cards_Writing_Tool/pages/9_\360\237\223\214_Citation.py" deleted file mode 100644 index dccf881f810fe764f35c987ac53a15d22a07b2ee..0000000000000000000000000000000000000000 --- "a/spaces/huggingface/Model_Cards_Writing_Tool/pages/9_\360\237\223\214_Citation.py" +++ /dev/null @@ -1,48 +0,0 @@ -import streamlit as st -from persist import persist, load_widget_state -from pathlib import Path - - - -global variable_output - -def main(): - cs_body() - -def cs_body(): - - st.markdown('# Citation') - st.write("If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section") - left, right = st.columns([2,4]) - - #st.markdown('### Model Description') - - - with left: - st.write("\n") - st.write("\n") - st.markdown('### BibTeX:') - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.write("\n") - st.markdown('### APA:') - - - with right: - - st.text_area("", key=persist("bibtex_citation")) - st.text_area("", key=persist("APA_citation")) - #st.write("\n") - - - - - - - -if __name__ == '__main__': - load_widget_state() - main() \ No newline at end of file diff --git a/spaces/hzwluoye/gpt4/client/css/typing.css 
b/spaces/hzwluoye/gpt4/client/css/typing.css deleted file mode 100644 index f998ebe7f2172e4ac23cdeff6ba6fd811b67a145..0000000000000000000000000000000000000000 --- a/spaces/hzwluoye/gpt4/client/css/typing.css +++ /dev/null @@ -1,15 +0,0 @@ -.typing { - position: absolute; - top: -25px; - left: 0; - font-size: 14px; - animation: show_popup 0.4s; -} - -.typing-hiding { - animation: hide_popup 0.4s; -} - -.typing-hidden { - display: none; -} diff --git a/spaces/ibm-nasa-geospatial/Prithvi-100M-sen1floods11-demo/README.md b/spaces/ibm-nasa-geospatial/Prithvi-100M-sen1floods11-demo/README.md deleted file mode 100644 index ed888bf2d4c095d29a86df2c2606820e97343c30..0000000000000000000000000000000000000000 --- a/spaces/ibm-nasa-geospatial/Prithvi-100M-sen1floods11-demo/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Prithvi 100M Sen1floods11 -emoji: 👁 -colorFrom: indigo -colorTo: purple -sdk: docker -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/inamXcontru/PoeticTTS/Dinner and a Movie Albany NY The Madison Theatre Offers In-Seat Ordering Live Shows and Unlimited Movie Memberships.md b/spaces/inamXcontru/PoeticTTS/Dinner and a Movie Albany NY The Madison Theatre Offers In-Seat Ordering Live Shows and Unlimited Movie Memberships.md deleted file mode 100644 index 4aa8ee6494a7f184371a23d98ddd22e2d5210021..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Dinner and a Movie Albany NY The Madison Theatre Offers In-Seat Ordering Live Shows and Unlimited Movie Memberships.md +++ /dev/null @@ -1,22 +0,0 @@ - -

      • Tableside service by our friendly wait staff. No buffet lines.
      • Groups and tour buses welcome (over 75 groups each season)
      • The smallest, most intimate professional dinner theatre in America.
      • A choice of four delicious entrees (medallions of beef, oven-baked chicken breast (gluten free), broiled Atlantic salmon (gluten free), vegetarian pasta) served with salad, rolls, vegetables, and dessert with coffee or hot tea. Bar beverages extra.
      • Price includes tax and gratuity
      • Air conditioning
      • Sound reinforcement throughout the theatre
      • Wheelchair access
      • Plenty of free parking
      • A magnificent view of the Adirondacks and the Queen of American Lakes
      • A terrific date night or family event (kids 15 and up)
      For more information, visit their website or call the Box Office at 518-668-5762 x411.

      -

      A night at the movies just got a lot more fun. Celebrate the revival of a longtime Pine Hills neighborhood favorite with this Deal at The Madison Theatre, good for movie admission for 2 people and $20 toward dinner at the theatre. It’s a $42 value, yours for $21.

      -

      dinner and a movie albany ny


      Download ✒ ✒ ✒ https://gohhs.com/2uz5TH



      -


    A movie & dinner fundraiser for the Erben from across the country Updated: March 26, 2020 by Lana Krakovskiy

    -

    About The Event: Enjoy a screening of one of Hollywood's latest DVD releases! Prisoners is a drama starring Hugh Jackman, Jake Gyllenhaal, Viola Davis, Melissa Leo, Paul Dano, Maria Bello, and Terrence Howard. The Friends of the Library will provide moviegoers with two slices of pizza, a snack, and a can of soda/bottle of water.

    -

    About The Film: In the rural New England town of Brockton, Massachusetts, neighbors and friends the Dovers and the Birches gather for Thanksgiving dinner, but by the end of the night, their celebration turns to panic when the families' two youngest daughters go missing. As the hours pass and the girls don't turn up, it becomes terrifyingly apparent they've been kidnapped. After the cops fail to find them, Keller Dover (Hugh Jackman) decides to take the law into his own hands, running up against dedicated Detective Loki (Jake Gyllenhaal). But even as Loki is diligently working against the clock to solve the case, Keller will stop at nothing to get their prime suspect to talk before it's too late. Rated R for disturbing violent content including torture, and language throughout. 153 minutes.

    -

    After a rough two years during-and-post pandemic, people are finally getting back to movie theaters. It's been good to have big summer blockbusters lighting up the screens and electrifying audiences again.

    -

    There were serious doubts as to whether movie theaters would make it out of the pandemic with in-home competition and lockdown restrictions, but now theaters are starting to see the light at the end of the tunnel. To celebrate a great summer and thank moviegoers, there will be an unprecedented nationwide "thank you."

    -

    -

    The nation's biggest chains are joining forces with local and independent movie houses to say thank you by lowering ticket prices to $3 this Saturday, September 3. This herculean coming together was orchestrated, or should I say directed, by The Cinema Foundation, and even includes movies in IMAX and other premium formats.

    -

    Easy - any movie playing on Saturday! You could see any film currently running in theaters, like Top Gun: Maverick, the latest Minions, or Bullet Train. There are also special offerings for National Cinema Day.

    -

    For any Marvel fans, the $3 weekend also coincides with the theatrical re-release of Spider-Man: No Way Home, which originally came out in December. If you love the classics, some theaters will be screening Jaws (very topical on Long Island this summer, I hear) and the 1958 monster movie, The Blob.

    -

    Albany NY - A portion of an upcoming WWII movie was filmed on the USS Slater (DE-766), a former destroyer escort docked in Albany. Filming for 'Orion in Midsummer' began aboard the USS Slater on August 17 and continued for nine days through August 29, 2008.

    -

    Tokyo-based production company Destiny, Inc. filmed the movie in conjunction with Marcom Visual Creation, Inc. of NYC. The story is set in the Pacific Ocean near the end of the Second World War. The storyline encompasses the final days of the war and the struggle between Japanese submarines and American destroyers. The USS Slater was chosen for the role because the ship remains the last destroyer escort afloat in the US. Other portions of the movie were filmed in Japan and other locations.

    -

    Well, good news. Looks like the Madison Theatre will indeed be reopening, but as expected it won't be the same Madison that the community has come to know and love. However, it does seem like it will still offer a unique moviegoing experience, just with less of a retro feel and more of a modern touch.

    -

    The Madison is expected to be taken over by Cosmic Cinemas. According to the TU, the plan is to upgrade the four screens with state-of-the-art digital projection, add new rocker seats, show primarily new films, and offer from-scratch meals and high-end cocktails while you watch. The new theater is going to cater to the adult moviegoer claiming to be

    -

    Other upcoming activities you will find posted here on our website. Examples of past events: Annual Fall Bazaar, many church dinners, rummage sales, bible studies, Strawberry Festivals, family movie nights, covered dishes, lighted churches, talent shows, and many others.

    -

    Whether you have spent the day exploring the iconic Vermont State House or you have walked for hours around Hubbard Park, you are likely going to want to grab dinner in town. This post discusses the best dinner options in Montpelier as well as a few recommended activities to get into before or after your meal.

    -

    Our menu features sweet crepes, such as the Blueberry Frumple Cake made with fresh Vermont blueberry compote, as well as savory crepes, like the Chicken Cheesy Pesto, which is filled with roasted chicken, pesto, caramelized onions, Vermont cheddar and mozzarella cheeses. Did you know any of our crepes can be made with our gluten-free/vegan buckwheat batter? We also have some flavorful sandwiches, burgers and sides to choose from. Additionally, our menu features some delicious local craft beer and cocktails that are sure to impress. A few other Montpelier dinner staples from our menu include:

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/inflaton/learn-ai/app_modules/llm_chat_chain.py b/spaces/inflaton/learn-ai/app_modules/llm_chat_chain.py deleted file mode 100644 index 08a556afc3ac34067271a4be922572f75afc55c7..0000000000000000000000000000000000000000 --- a/spaces/inflaton/learn-ai/app_modules/llm_chat_chain.py +++ /dev/null @@ -1,58 +0,0 @@ -import os -from typing import List, Optional - -from langchain import ConversationChain, PromptTemplate -from langchain.chains.base import Chain -from langchain.memory import ConversationSummaryBufferMemory - -from app_modules.llm_inference import LLMInference - - -def get_llama_2_prompt_template(): - B_INST, E_INST = "[INST]", "[/INST]" - B_SYS, E_SYS = "<>\n", "\n<>\n\n" - - instruction = "Chat History:\n\n{history} \n\nUser: {input}" - system_prompt = "You are a helpful assistant, you always only answer for the assistant then you stop. Read the chat history to get context" - # system_prompt = """\ - # You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. \n\nDo not output any emotional expression. 
Read the chat history to get context.\ - # """ - - SYSTEM_PROMPT = B_SYS + system_prompt + E_SYS - prompt_template = B_INST + SYSTEM_PROMPT + instruction + E_INST - return prompt_template - - -class ChatChain(LLMInference): - def __init__(self, llm_loader): - super().__init__(llm_loader) - - def create_chain(self) -> Chain: - template = ( - get_llama_2_prompt_template() - if os.environ.get("USE_LLAMA_2_PROMPT_TEMPLATE") == "true" - else """You are a chatbot having a conversation with a human. -{history} -Human: {input} -Chatbot:""" - ) - - print(f"template: {template}") - - prompt = PromptTemplate(input_variables=["history", "input"], template=template) - - memory = ConversationSummaryBufferMemory( - llm=self.llm_loader.llm, max_token_limit=1024, return_messages=True - ) - - llm_chain = ConversationChain( - llm=self.llm_loader.llm, - prompt=prompt, - verbose=True, - memory=memory, - ) - - return llm_chain - - def run_chain(self, chain, inputs, callbacks: Optional[List] = []): - return chain({"input": inputs["question"]}, callbacks) diff --git a/spaces/innnky/nyaru-svc2.0-advanced/attentions.py b/spaces/innnky/nyaru-svc2.0-advanced/attentions.py deleted file mode 100644 index ab8e176a26b0d009c3a38683aa168110079f32fd..0000000000000000000000000000000000000000 --- a/spaces/innnky/nyaru-svc2.0-advanced/attentions.py +++ /dev/null @@ -1,311 +0,0 @@ -import math - -import torch -from torch import nn -from torch.nn import functional as t_func - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, - **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - 
self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, - window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., - proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, - proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention(hidden_channels, hidden_channels, n_heads, 
p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, - block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels ** -0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - 
self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query / math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = t_func.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = t_func.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:, slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. 
- x = t_func.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = t_func.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[:, :, :length, length - 1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = t_func.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])) - x_flat = x.view([batch, heads, length ** 2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = t_func.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, - causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = t_func.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = t_func.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/innnky/soft-vits-vc/commons.py b/spaces/innnky/soft-vits-vc/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/innnky/soft-vits-vc/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch 
import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * 
-log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = 
path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Barbie Princess Charm School Movie In Hindi 396.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Barbie Princess Charm School Movie In Hindi 396.md deleted file mode 100644 index 61e89bc679391d24d22cdcf65dea6b4ff65d44dc..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Barbie Princess Charm School Movie In Hindi 396.md +++ /dev/null @@ -1,16 +0,0 @@ -
    -

    Barbie Princess Charm School Movie In Hindi 396: A Review

    -

    Barbie Princess Charm School Movie In Hindi 396 is a dubbed version of the 2011 animated film Barbie: Princess Charm School. The film follows the story of Blair Willows, a kind-hearted girl who is selected to attend Princess Charm School, a magical academy that teaches girls how to behave like princesses. There, she meets her roommates Hadley and Isla, and learns that she is the long-lost princess of Gardania. However, she also faces opposition from Delancy, the spoiled daughter of the headmistress, who wants to claim the throne for herself.

    -

    Barbie Princess Charm School Movie In Hindi 396


    Download >>>>> https://urlin.us/2uEyvi



    -

    The film is a fun and entertaining adventure for young girls who love Barbie and princesses. The animation is colorful and lively, and the voice acting is well done. The film also carries positive messages about friendship, courage, and kindness. The Hindi dubbing is decent and does not noticeably detract from the film.

    -

    Barbie Princess Charm School Movie In Hindi 396 is a good choice for fans of Barbie and fairy tales. It is available to watch online on various platforms. If you are looking for a charming and magical movie to enjoy with your family or friends, you might want to give this film a try.

    - -

    The film features many characters from the Barbie franchise, such as Barbie herself as Blair, Kelly Sheridan as Delancy, and Nicole Oliver as Miss Privet. The film also introduces some new characters, such as Prince Nicholas, Dame Devin, and Brock. The film has a mix of comedy, drama, and romance, as well as some musical numbers. The film has a runtime of 79 minutes and is rated G for general audiences.

    -

    The film was released on September 13, 2011 in the United States and Canada, and on October 28, 2011 in India. It received mostly positive reviews from critics and audiences alike, earning praise for its animation, voice acting, story, and messages, and it was nominated for several awards, such as the Kids' Choice Awards India and the Young Artist Awards.

    -

    Barbie Princess Charm School Movie In Hindi 396 is one of the many Barbie films that have been dubbed in Hindi for the Indian market. Some of the other Barbie films that have been dubbed in Hindi are Barbie in A Mermaid Tale, Barbie: A Fashion Fairytale, Barbie: A Fairy Secret, and Barbie: The Princess and the Popstar. These films are popular among young girls who love Barbie and her adventures.

    -

    - -

    If you are interested in watching Barbie Princess Charm School Movie In Hindi 396, you can find it online on various platforms, such as YouTube, Dailymotion, and Netflix. You can also buy or rent the DVD or Blu-ray from Amazon or other online stores. The film is suitable for all ages and can be enjoyed by anyone who loves Barbie and princesses.

    -

    Barbie Princess Charm School Movie In Hindi 396 is a delightful and enchanting film that will make you smile and cheer. It celebrates the power of friendship, courage, and kindness, and it shows that anyone can be a princess if they believe in themselves and follow their dreams. It is a film you will not regret watching.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Ben 10 Destroy All Aliens Movie In Hindi Download UPD.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Ben 10 Destroy All Aliens Movie In Hindi Download UPD.md deleted file mode 100644 index 6b1b690772b3d063224bad825e66afffb6dde41c..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Ben 10 Destroy All Aliens Movie In Hindi Download UPD.md +++ /dev/null @@ -1,6 +0,0 @@ -

    ben 10 destroy all aliens movie in hindi download


    Download Filehttps://urlin.us/2uEyKh



    - - 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Download T Racks 3 Deluxe Full Crack 11.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Download T Racks 3 Deluxe Full Crack 11.md deleted file mode 100644 index ff75766bcd50326234705d91c06790dcb1fe6091..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Download T Racks 3 Deluxe Full Crack 11.md +++ /dev/null @@ -1,18 +0,0 @@ -

    download t racks 3 deluxe full crack 11


    DOWNLOAD ✑ ✑ ✑ https://urlin.us/2uEvNf



    - -There are 3 ways to mix your audio. The first is the analog modeled approach, which works much like a real tube mixer. The second is the digital approach, which uses a PC platform to emulate an analog console and can emulate up to 16 inputs and 16 outputs. The third is the hybrid approach, a combination of analog and digital: you can create an analog effect from digital plug-ins, or a digital effect from analog plug-ins, letting you have your cake and eat it too. The hybrid approach is the most powerful way to mix. - -A massive 80GB hard drive stores over 30,000 presets, plus 2GB of sample memory for instant recall. What's more, the T-RackS 3 Deluxe also features powerful 32-bit floating point processing, which means you can mix with up to 24-bit resolution and sampling rates of up to 192kHz. - -The T-RackS 3 Deluxe has a new user interface that's as simple as it is powerful. It has a drag-and-drop function that allows you to move plug-ins around, arrange them, and resize them, and it's just as simple to set up. You can import, export, and transfer presets in and out of the T-RackS 3 Deluxe. The software also provides an extensive collection of song structure tools, with the ability to set up your own song structure templates. - -T-RackS 3 Deluxe also features the integrated T-RackS 3 utility, which can be used to create and edit multi-track audio, mix audio files, and create audio CDs. The utility also offers a 64-bit floating-point audio engine that's as powerful as it is flexible. - -With the T-RackS 3 Deluxe, you'll experience true professional mixing and mastering. 
- -This product includes 9 analog modeled and digital effects processors, plus a powerful 32-bit floating point audio engine that is as powerful as it is flexible. - -There are 3 ways to mix your audio. The first is the analog modeled 4fefd39f24
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Movavi Video Editor Business 20.5.1 Crack Serial Key Keygen UPDATED.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Movavi Video Editor Business 20.5.1 Crack Serial Key Keygen UPDATED.md deleted file mode 100644 index ab41c134bb8be07e963f8cae860b72e13732790b..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Movavi Video Editor Business 20.5.1 Crack Serial Key Keygen UPDATED.md +++ /dev/null @@ -1,42 +0,0 @@ -

    Movavi Video Editor Business 20.5.1 Crack Serial Key Keygen


    Download Filehttps://urlin.us/2uEwZX



    - -Movavi Video Editor Business Crack is multimedia software developed by Movavi. Movavi Video Editor is a powerful video editing application for Windows. With it, you can edit photos, movies, videos, music, and more to create impressive visual effects. It lets the user record, edit, play, and burn videos, and it can record video in high-definition format. The program has professional features that make it one of the best and most popular video editors available on the market. - -This software offers a lot of features and impressive functionality. It is a multimedia tool that helps you create and edit videos, photos, and even audio files. It is simple to use, so you don't have to be an expert. - -It can be used by both Windows and Mac users. This software is a professional solution: it helps you create video clips from images, videos, and sound files. It is fully compatible with the latest versions of Adobe CS 5 and CS 6. You can also use it to compress and convert video to other formats such as AVI, MOV, 3GP, and more. - -Key Features: - -It has features that make the work much easier and more efficient. - -You can use the built-in video editor to record and edit video, photos, and more. - -It is easy and simple to use. - -It supports all the popular file formats, such as MP4, MOV, AVI, MP3, 3GP, and more. - -The program gives you the option to use additional codecs. - -You can edit the audio of a file by adding new effects to it. - -The program can customize the background. - -It can be used by both Windows and Mac users. - -The user interface is simple to use. - -You can also add text to the video for multimedia purposes. - -It is a powerful video editor that is fully compatible with the latest versions of Adobe CS 5 and CS 6. 
- -It is a standalone application. - -It supports a wide range of file formats. - -How To Crack? - -First of all, download the setup from the given link. After downloading the setup, run the setup file by using the installation wizard. It will show the features of the software 4fefd39f24
    -
    -
    -

diff --git a/spaces/j0hngou/vision-diffmask/code/models/classification.py b/spaces/j0hngou/vision-diffmask/code/models/classification.py
deleted file mode 100644
index 17e22657aec65100d92177d51fc26c9e2e6cfc00..0000000000000000000000000000000000000000
--- a/spaces/j0hngou/vision-diffmask/code/models/classification.py
+++ /dev/null
@@ -1,112 +0,0 @@
-"""
-Parts of this file have been adapted from
-https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/tutorial15/Vision_Transformer.html
-"""
-
-import pytorch_lightning as pl
-import torch.nn.functional as F
-
-from argparse import ArgumentParser
-from torch import Tensor
-from torch.optim import AdamW, Optimizer, RAdam
-from torch.optim.lr_scheduler import _LRScheduler
-from transformers import get_scheduler, PreTrainedModel
-
-
-class ImageClassificationNet(pl.LightningModule):
-    @staticmethod
-    def add_model_specific_args(parent_parser: ArgumentParser) -> ArgumentParser:
-        parser = parent_parser.add_argument_group("Classification Model")
-        parser.add_argument(
-            "--optimizer",
-            type=str,
-            default="AdamW",
-            choices=["AdamW", "RAdam"],
-            help="The optimizer to use to train the model.",
-        )
-        parser.add_argument(
-            "--weight_decay",
-            type=float,
-            default=1e-2,
-            help="The optimizer's weight decay.",
-        )
-        parser.add_argument(
-            "--lr",
-            type=float,
-            default=5e-5,
-            help="The initial learning rate for the model.",
-        )
-        return parent_parser
-
-    def __init__(
-        self,
-        model: PreTrainedModel,
-        num_train_steps: int,
-        optimizer: str = "AdamW",
-        weight_decay: float = 1e-2,
-        lr: float = 5e-5,
-    ):
-        """A PyTorch Lightning Module for a HuggingFace model used for image classification.
-
-        Args:
-            model (PreTrainedModel): a pretrained model for image classification
-            num_train_steps (int): number of training steps
-            optimizer (str): optimizer to use
-            weight_decay (float): weight decay for optimizer
-            lr (float): the learning rate used for training
-        """
-        super().__init__()
-
-        # Save the hyperparameters and the model
-        self.save_hyperparameters(ignore=["model"])
-        self.model = model
-
-    def forward(self, x: Tensor) -> Tensor:
-        return self.model(x).logits
-
-    def configure_optimizers(self) -> tuple[list[Optimizer], list[_LRScheduler]]:
-        # Set the optimizer class based on the hyperparameter
-        if self.hparams.optimizer == "AdamW":
-            optim_class = AdamW
-        elif self.hparams.optimizer == "RAdam":
-            optim_class = RAdam
-        else:
-            raise Exception(f"Unknown optimizer {self.hparams.optimizer}")
-
-        # Create the optimizer and the learning rate scheduler
-        optimizer = optim_class(
-            self.parameters(),
-            weight_decay=self.hparams.weight_decay,
-            lr=self.hparams.lr,
-        )
-        lr_scheduler = get_scheduler(
-            name="linear",
-            optimizer=optimizer,
-            num_warmup_steps=0,
-            num_training_steps=self.hparams.num_train_steps,
-        )
-
-        return [optimizer], [lr_scheduler]
-
-    def _calculate_loss(self, batch: tuple[Tensor, Tensor], mode: str) -> Tensor:
-        imgs, labels = batch
-
-        preds = self.model(imgs).logits
-        loss = F.cross_entropy(preds, labels)
-        acc = (preds.argmax(dim=-1) == labels).float().mean()
-
-        self.log(f"{mode}_loss", loss)
-        self.log(f"{mode}_acc", acc)
-
-        return loss
-
-    def training_step(self, batch: tuple[Tensor, Tensor], _: Tensor) -> Tensor:
-        loss = self._calculate_loss(batch, mode="train")
-
-        return loss
-
-    def validation_step(self, batch: tuple[Tensor, Tensor], _: Tensor):
-        self._calculate_loss(batch, mode="val")
-
-    def test_step(self, batch: tuple[Tensor, Tensor], _: Tensor):
-        self._calculate_loss(batch, mode="test")
diff --git a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/render.py
b/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/render.py deleted file mode 100644 index 7a3d141f3e00216b530d05c205c5f94f0ad814ab..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/render.py +++ /dev/null @@ -1,507 +0,0 @@ -import os -import json -import pandas as pd -import cv2 -import numpy as np -from PIL import Image, ImageOps -from .rich import console - -from .generate import generate -from .noise import add_noise -from .animation import sample_from_cv2, sample_to_cv2, anim_frame_warp -from .animation_key_frames import DeformAnimKeys, LooperAnimKeys -from .video_audio_utilities import get_frame_name, get_next_frame -from .depth import DepthModel -from .colors import maintain_colors -from .parseq_adapter import ParseqAnimKeys -from .seed import next_seed -from .blank_frame_reroll import blank_frame_reroll -from .image_sharpening import unsharp_mask -from .load_images import get_mask, load_img, get_mask_from_file -from .hybrid_video import hybrid_generation, hybrid_composite -from .hybrid_video import get_matrix_for_hybrid_motion, get_matrix_for_hybrid_motion_prev, get_flow_for_hybrid_motion, get_flow_for_hybrid_motion_prev, image_transform_ransac, image_transform_optical_flow -from .save_images import save_image -from .composable_masks import compose_mask_with_check -from .settings import get_keys_to_exclude -from .deforum_controlnet import unpack_controlnet_vids, is_controlnet_enabled -# Webui -from modules.shared import opts, cmd_opts, state, sd_model -from modules import lowvram, devices, sd_hijack - -def render_animation(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, animation_prompts, root): - # handle hybrid video generation - if anim_args.animation_mode in ['2D','3D']: - if anim_args.hybrid_composite or anim_args.hybrid_motion in ['Affine', 'Perspective', 'Optical Flow']: - args, anim_args, inputfiles = 
hybrid_generation(args, anim_args, root) - # path required by hybrid functions, even if hybrid_comp_save_extra_frames is False - hybrid_frame_path = os.path.join(args.outdir, 'hybridframes') - - # handle controlnet video input frames generation - if is_controlnet_enabled(controlnet_args): - unpack_controlnet_vids(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, animation_prompts, root) - - # use parseq if manifest is provided - use_parseq = parseq_args.parseq_manifest != None and parseq_args.parseq_manifest.strip() - # expand key frame strings to values - keys = DeformAnimKeys(anim_args) if not use_parseq else ParseqAnimKeys(parseq_args, anim_args) - loopSchedulesAndData = LooperAnimKeys(loop_args, anim_args) - # resume animation - start_frame = 0 - if anim_args.resume_from_timestring: - for tmp in os.listdir(args.outdir): - if ".txt" in tmp : - pass - else: - filename = tmp.split("_") - # don't use saved depth maps to count number of frames - if anim_args.resume_timestring in filename and "depth" not in filename: - start_frame += 1 - #start_frame = start_frame - 1 - - # create output folder for the batch - os.makedirs(args.outdir, exist_ok=True) - print(f"Saving animation frames to:\n{args.outdir}") - - # save settings for the batch - exclude_keys = get_keys_to_exclude('general') - settings_filename = os.path.join(args.outdir, f"{args.timestring}_settings.txt") - with open(settings_filename, "w+", encoding="utf-8") as f: - args.__dict__["prompts"] = animation_prompts - s = {} - for d in [dict(args.__dict__), dict(anim_args.__dict__), dict(parseq_args.__dict__), dict(loop_args.__dict__)]: - for key, value in d.items(): - if key not in exclude_keys: - s[key] = value - json.dump(s, f, ensure_ascii=False, indent=4) - - # resume from timestring - if anim_args.resume_from_timestring: - args.timestring = anim_args.resume_timestring - - # Always enable pseudo-3d with parseq. 
No need for an extra toggle: - # Whether it's used or not in practice is defined by the schedules - if use_parseq: - anim_args.flip_2d_perspective = True - - # expand prompts out to per-frame - if use_parseq: - prompt_series = keys.prompts - else: - prompt_series = pd.Series([np.nan for a in range(anim_args.max_frames)]) - for i, prompt in animation_prompts.items(): - prompt_series[int(i)] = prompt - prompt_series = prompt_series.ffill().bfill() - - # check for video inits - using_vid_init = anim_args.animation_mode == 'Video Input' - - # load depth model for 3D - predict_depths = (anim_args.animation_mode == '3D' and anim_args.use_depth_warping) or anim_args.save_depth_maps - predict_depths = predict_depths or (anim_args.hybrid_composite and anim_args.hybrid_comp_mask_type in ['Depth','Video Depth']) - if predict_depths: - depth_model = DepthModel('cpu' if cmd_opts.lowvram or cmd_opts.medvram else root.device) - depth_model.load_midas(root.models_path, root.half_precision) - if anim_args.midas_weight < 1.0: - depth_model.load_adabins(root.models_path) - # depth-based hybrid composite mask requires saved depth maps - if anim_args.hybrid_composite and anim_args.hybrid_comp_mask_type =='Depth': - anim_args.save_depth_maps = True - else: - depth_model = None - anim_args.save_depth_maps = False - - # state for interpolating between diffusion steps - turbo_steps = 1 if using_vid_init else int(anim_args.diffusion_cadence) - turbo_prev_image, turbo_prev_frame_idx = None, 0 - turbo_next_image, turbo_next_frame_idx = None, 0 - - # resume animation - prev_img = None - color_match_sample = None - if anim_args.resume_from_timestring: - last_frame = start_frame-1 - if turbo_steps > 1: - last_frame -= last_frame%turbo_steps - path = os.path.join(args.outdir,f"{args.timestring}_{last_frame:05}.png") - img = cv2.imread(path) - #img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Changed the colors on resume - prev_img = img - if anim_args.color_coherence != 'None': - color_match_sample = 
img - if turbo_steps > 1: - turbo_next_image, turbo_next_frame_idx = prev_img, last_frame - turbo_prev_image, turbo_prev_frame_idx = turbo_next_image, turbo_next_frame_idx - start_frame = last_frame+turbo_steps - - args.n_samples = 1 - frame_idx = start_frame - - # reset the mask vals as they are overwritten in the compose_mask algorithm - mask_vals = {} - noise_mask_vals = {} - - mask_vals['everywhere'] = Image.new('1', (args.W, args.H), 1) - noise_mask_vals['everywhere'] = Image.new('1', (args.W, args.H), 1) - - mask_image = None - - if args.use_init and args.init_image != None and args.init_image != '': - _, mask_image = load_img(args.init_image, - shape=(args.W, args.H), - use_alpha_as_mask=args.use_alpha_as_mask) - mask_vals['init_mask'] = mask_image - noise_mask_vals['init_mask'] = mask_image - - # Grab the first frame masks since they wont be provided until next frame - if mask_image is None and args.use_mask: - mask_vals['init_mask'] = get_mask(args) - noise_mask_vals['init_mask'] = get_mask(args) # TODO?: add a different default noise mask - - if anim_args.use_mask_video: - mask_vals['video_mask'] = get_mask_from_file(get_next_frame(args.outdir, anim_args.video_mask_path, frame_idx, True), args) - noise_mask_vals['video_mask'] = get_mask_from_file(get_next_frame(args.outdir, anim_args.video_mask_path, frame_idx, True), args) - else: - mask_vals['video_mask'] = None - noise_mask_vals['video_mask'] = None - - #Webui - state.job_count = anim_args.max_frames - - while frame_idx < anim_args.max_frames: - #Webui - state.job = f"frame {frame_idx + 1}/{anim_args.max_frames}" - state.job_no = frame_idx + 1 - if state.interrupted: - break - - print(f"\033[36mAnimation frame: \033[0m{frame_idx}/{anim_args.max_frames} ") - - noise = keys.noise_schedule_series[frame_idx] - strength = keys.strength_schedule_series[frame_idx] - scale = keys.cfg_scale_schedule_series[frame_idx] - contrast = keys.contrast_schedule_series[frame_idx] - kernel = 
int(keys.kernel_schedule_series[frame_idx]) - sigma = keys.sigma_schedule_series[frame_idx] - amount = keys.amount_schedule_series[frame_idx] - threshold = keys.threshold_schedule_series[frame_idx] - hybrid_comp_schedules = { - "alpha": keys.hybrid_comp_alpha_schedule_series[frame_idx], - "mask_blend_alpha": keys.hybrid_comp_mask_blend_alpha_schedule_series[frame_idx], - "mask_contrast": keys.hybrid_comp_mask_contrast_schedule_series[frame_idx], - "mask_auto_contrast_cutoff_low": int(keys.hybrid_comp_mask_auto_contrast_cutoff_low_schedule_series[frame_idx]), - "mask_auto_contrast_cutoff_high": int(keys.hybrid_comp_mask_auto_contrast_cutoff_high_schedule_series[frame_idx]), - } - scheduled_sampler_name = None - scheduled_clipskip = None - mask_seq = None - noise_mask_seq = None - if anim_args.enable_steps_scheduling and keys.steps_schedule_series[frame_idx] is not None: - args.steps = int(keys.steps_schedule_series[frame_idx]) - if anim_args.enable_sampler_scheduling and keys.sampler_schedule_series[frame_idx] is not None: - scheduled_sampler_name = keys.sampler_schedule_series[frame_idx].casefold() - if anim_args.enable_clipskip_scheduling and keys.clipskip_schedule_series[frame_idx] is not None: - scheduled_clipskip = int(keys.clipskip_schedule_series[frame_idx]) - if args.use_mask and keys.mask_schedule_series[frame_idx] is not None: - mask_seq = keys.mask_schedule_series[frame_idx] - if anim_args.use_noise_mask and keys.noise_mask_schedule_series[frame_idx] is not None: - noise_mask_seq = keys.noise_mask_schedule_series[frame_idx] - - if args.use_mask and not anim_args.use_noise_mask: - noise_mask_seq = mask_seq - - depth = None - - if anim_args.animation_mode == '3D' and (cmd_opts.lowvram or cmd_opts.medvram): - # Unload the main checkpoint and load the depth model - lowvram.send_everything_to_cpu() - sd_hijack.model_hijack.undo_hijack(sd_model) - devices.torch_gc() - depth_model.to(root.device) - - # emit in-between frames - if turbo_steps > 1: - 
tween_frame_start_idx = max(0, frame_idx-turbo_steps) - for tween_frame_idx in range(tween_frame_start_idx, frame_idx): - tween = float(tween_frame_idx - tween_frame_start_idx + 1) / float(frame_idx - tween_frame_start_idx) - print(f" Creating in-between frame: {tween_frame_idx}; tween:{tween:0.2f};") - - advance_prev = turbo_prev_image is not None and tween_frame_idx > turbo_prev_frame_idx - advance_next = tween_frame_idx > turbo_next_frame_idx - - if depth_model is not None: - assert(turbo_next_image is not None) - depth = depth_model.predict(turbo_next_image, anim_args, root.half_precision) - - if advance_prev: - turbo_prev_image, _ = anim_frame_warp(turbo_prev_image, args, anim_args, keys, tween_frame_idx, depth_model, depth=depth, device=root.device, half_precision=root.half_precision) - if advance_next: - turbo_next_image, _ = anim_frame_warp(turbo_next_image, args, anim_args, keys, tween_frame_idx, depth_model, depth=depth, device=root.device, half_precision=root.half_precision) - - # hybrid video motion - warps turbo_prev_image or turbo_next_image to match motion - if tween_frame_idx > 0: - if anim_args.hybrid_motion in ['Affine', 'Perspective']: - if anim_args.hybrid_motion_use_prev_img: - if advance_prev: - matrix = get_matrix_for_hybrid_motion_prev(tween_frame_idx, (args.W, args.H), inputfiles, turbo_prev_image, anim_args.hybrid_motion) - turbo_prev_image = image_transform_ransac(turbo_prev_image, matrix, anim_args.hybrid_motion, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - if advance_next: - matrix = get_matrix_for_hybrid_motion_prev(tween_frame_idx, (args.W, args.H), inputfiles, turbo_next_image, anim_args.hybrid_motion) - turbo_next_image = image_transform_ransac(turbo_next_image, matrix, anim_args.hybrid_motion, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - else: - matrix = get_matrix_for_hybrid_motion(tween_frame_idx-1, (args.W, args.H), inputfiles, anim_args.hybrid_motion) - if 
advance_prev: - turbo_prev_image = image_transform_ransac(turbo_prev_image, matrix, anim_args.hybrid_motion, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - if advance_next: - turbo_next_image = image_transform_ransac(turbo_next_image, matrix, anim_args.hybrid_motion, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - if anim_args.hybrid_motion in ['Optical Flow']: - if anim_args.hybrid_motion_use_prev_img: - if advance_prev: - flow = get_flow_for_hybrid_motion_prev(tween_frame_idx-1, (args.W, args.H), inputfiles, hybrid_frame_path, turbo_prev_image, anim_args.hybrid_flow_method, anim_args.hybrid_comp_save_extra_frames) - turbo_prev_image = image_transform_optical_flow(turbo_prev_image, flow, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - if advance_next: - flow = get_flow_for_hybrid_motion_prev(tween_frame_idx-1, (args.W, args.H), inputfiles, hybrid_frame_path, turbo_next_image, anim_args.hybrid_flow_method, anim_args.hybrid_comp_save_extra_frames) - turbo_next_image = image_transform_optical_flow(turbo_next_image, flow, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - else: - flow = get_flow_for_hybrid_motion(tween_frame_idx-1, (args.W, args.H), inputfiles, hybrid_frame_path, anim_args.hybrid_flow_method, anim_args.hybrid_comp_save_extra_frames) - if advance_prev: - turbo_prev_image = image_transform_optical_flow(turbo_prev_image, flow, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - if advance_next: - turbo_next_image = image_transform_optical_flow(turbo_next_image, flow, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - - turbo_prev_frame_idx = turbo_next_frame_idx = tween_frame_idx - - if turbo_prev_image is not None and tween < 1.0: - img = turbo_prev_image*(1.0-tween) + turbo_next_image*tween - else: - img = turbo_next_image - - # intercept and override to grayscale - if anim_args.color_force_grayscale: 
- img = cv2.cvtColor(img.astype(np.uint8), cv2.COLOR_BGR2GRAY) - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - - filename = f"{args.timestring}_{tween_frame_idx:05}.png" - cv2.imwrite(os.path.join(args.outdir, filename), img) - if anim_args.save_depth_maps: - depth_model.save(os.path.join(args.outdir, f"{args.timestring}_depth_{tween_frame_idx:05}.png"), depth) - if turbo_next_image is not None: - prev_img = turbo_next_image - - # apply transforms to previous frame - if prev_img is not None: - prev_img, depth = anim_frame_warp(prev_img, args, anim_args, keys, frame_idx, depth_model, depth=None, device=root.device, half_precision=root.half_precision) - - # hybrid video motion - warps prev_img to match motion, usually to prepare for compositing - if frame_idx > 0: - if anim_args.hybrid_motion in ['Affine', 'Perspective']: - if anim_args.hybrid_motion_use_prev_img: - matrix = get_matrix_for_hybrid_motion_prev(frame_idx, (args.W, args.H), inputfiles, prev_img, anim_args.hybrid_motion) - else: - matrix = get_matrix_for_hybrid_motion(frame_idx-1, (args.W, args.H), inputfiles, anim_args.hybrid_motion) - prev_img = image_transform_ransac(prev_img, matrix, anim_args.hybrid_motion, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - if anim_args.hybrid_motion in ['Optical Flow']: - if anim_args.hybrid_motion_use_prev_img: - flow = get_flow_for_hybrid_motion_prev(frame_idx-1, (args.W, args.H), inputfiles, hybrid_frame_path, prev_img, anim_args.hybrid_flow_method, anim_args.hybrid_comp_save_extra_frames) - else: - flow = get_flow_for_hybrid_motion(frame_idx-1, (args.W, args.H), inputfiles, hybrid_frame_path, anim_args.hybrid_flow_method, anim_args.hybrid_comp_save_extra_frames) - prev_img = image_transform_optical_flow(prev_img, flow, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - - # do hybrid video - composites video frame into prev_img (now warped if using motion) - if anim_args.hybrid_composite: - args, prev_img = 
hybrid_composite(args, anim_args, frame_idx, prev_img, depth_model, hybrid_comp_schedules, root) - - # apply color matching - if anim_args.color_coherence != 'None': - # video color matching - hybrid_available = anim_args.hybrid_composite or anim_args.hybrid_motion in ['Optical Flow', 'Affine', 'Perspective'] - if anim_args.color_coherence == 'Video Input' and hybrid_available: - video_color_coherence_frame = int(frame_idx) % int(anim_args.color_coherence_video_every_N_frames) == 0 - if video_color_coherence_frame: - prev_vid_img = Image.open(os.path.join(args.outdir, 'inputframes', get_frame_name(anim_args.video_init_path) + f"{frame_idx:05}.jpg")) - prev_vid_img = prev_vid_img.resize((args.W, args.H), Image.Resampling.LANCZOS) - color_match_sample = np.asarray(prev_vid_img) - color_match_sample = cv2.cvtColor(color_match_sample, cv2.COLOR_RGB2BGR) - if color_match_sample is None: - color_match_sample = prev_img.copy() - else: - prev_img = maintain_colors(prev_img, color_match_sample, anim_args.color_coherence) - - # intercept and override to grayscale - if anim_args.color_force_grayscale: - prev_img = cv2.cvtColor(prev_img, cv2.COLOR_BGR2GRAY) - prev_img = cv2.cvtColor(prev_img, cv2.COLOR_GRAY2BGR) - - # apply scaling - contrast_image = (prev_img * contrast).round().astype(np.uint8) - # anti-blur - if amount > 0: - contrast_image = unsharp_mask(contrast_image, (kernel, kernel), sigma, amount, threshold, mask_image if args.use_mask else None) - # apply frame noising - if args.use_mask or anim_args.use_noise_mask: - args.noise_mask = compose_mask_with_check(root, args, noise_mask_seq, noise_mask_vals, Image.fromarray(cv2.cvtColor(contrast_image, cv2.COLOR_BGR2RGB))) - noised_image = add_noise(contrast_image, noise, args.seed, anim_args.noise_type, - (anim_args.perlin_w, anim_args.perlin_h, anim_args.perlin_octaves, anim_args.perlin_persistence), - args.noise_mask, args.invert_mask) - - # use transformed previous frame as init for current - args.use_init = True - 
args.init_sample = Image.fromarray(cv2.cvtColor(noised_image, cv2.COLOR_BGR2RGB)) - args.strength = max(0.0, min(1.0, strength)) - - args.scale = scale - - # Pix2Pix Image CFG Scale - does *nothing* with non pix2pix checkpoints - args.pix2pix_img_cfg_scale = float(keys.pix2pix_img_cfg_scale_series[frame_idx]) - - # grab prompt for current frame - args.prompt = prompt_series[frame_idx] - - if args.seed_behavior == 'schedule' or use_parseq: - args.seed = int(keys.seed_schedule_series[frame_idx]) - - if anim_args.enable_checkpoint_scheduling: - args.checkpoint = keys.checkpoint_schedule_series[frame_idx] - else: - args.checkpoint = None - - #SubSeed scheduling - if anim_args.enable_subseed_scheduling: - args.subseed = int(keys.subseed_schedule_series[frame_idx]) - args.subseed_strength = float(keys.subseed_strength_schedule_series[frame_idx]) - - if use_parseq: - args.seed_enable_extras = True - args.subseed = int(keys.subseed_series[frame_idx]) - args.subseed_strength = keys.subseed_strength_series[frame_idx] - - prompt_to_print, *after_neg = args.prompt.strip().split("--neg") - prompt_to_print = prompt_to_print.strip() - after_neg = "".join(after_neg).strip() - - print(f"\033[32mSeed: \033[0m{args.seed}") - print(f"\033[35mPrompt: \033[0m{prompt_to_print}") - if after_neg and after_neg.strip(): - print(f"\033[91mNeg Prompt: \033[0m{after_neg}") - if not using_vid_init: - # print motion table to cli if anim mode = 2D or 3D - if anim_args.animation_mode in ['2D','3D']: - print_render_table(anim_args, keys, frame_idx) - - # grab init image for current frame - elif using_vid_init: - init_frame = get_next_frame(args.outdir, anim_args.video_init_path, frame_idx, False) - print(f"Using video init frame {init_frame}") - args.init_image = init_frame - if anim_args.use_mask_video: - mask_vals['video_mask'] = get_mask_from_file(get_next_frame(args.outdir, anim_args.video_mask_path, frame_idx, True), args) - - if args.use_mask: - args.mask_image = compose_mask_with_check(root, 
args, mask_seq, mask_vals, args.init_sample) if args.init_sample is not None else None # we need it only after the first frame anyway - - # setting up some arguments for the looper - loop_args.imageStrength = loopSchedulesAndData.image_strength_schedule_series[frame_idx] - loop_args.blendFactorMax = loopSchedulesAndData.blendFactorMax_series[frame_idx] - loop_args.blendFactorSlope = loopSchedulesAndData.blendFactorSlope_series[frame_idx] - loop_args.tweeningFrameSchedule = loopSchedulesAndData.tweening_frames_schedule_series[frame_idx] - loop_args.colorCorrectionFactor = loopSchedulesAndData.color_correction_factor_series[frame_idx] - loop_args.use_looper = loopSchedulesAndData.use_looper - loop_args.imagesToKeyframe = loopSchedulesAndData.imagesToKeyframe - - if scheduled_clipskip is not None: - opts.data["CLIP_stop_at_last_layers"] = scheduled_clipskip - - if anim_args.animation_mode == '3D' and (cmd_opts.lowvram or cmd_opts.medvram): - depth_model.to('cpu') - devices.torch_gc() - lowvram.setup_for_low_vram(sd_model, cmd_opts.medvram) - sd_hijack.model_hijack.hijack(sd_model) - - # sample the diffusion model - image = generate(args, anim_args, loop_args, controlnet_args, root, frame_idx, sampler_name=scheduled_sampler_name) - patience = 10 - - # intercept and override to grayscale - if anim_args.color_force_grayscale: - image = ImageOps.grayscale(image) - image = ImageOps.colorize(image, black ="black", white ="white") - - # reroll blank frame - if not image.getbbox(): - print("Blank frame detected! If you don't have the NSFW filter enabled, this may be due to a glitch!") - if args.reroll_blank_frames == 'reroll': - while not image.getbbox(): - print("Rerolling with +1 seed...") - args.seed += 1 - image = generate(args, anim_args, loop_args, controlnet_args, root, frame_idx, sampler_name=scheduled_sampler_name) - patience -= 1 - if patience == 0: - print("Rerolling with +1 seed failed for 10 iterations! 
Try setting webui's precision to 'full' and if it fails, please report this to the devs! Interrupting...") - state.interrupted = True - state.current_image = image - return - elif args.reroll_blank_frames == 'interrupt': - print("Interrupting to save your eyes...") - state.interrupted = True - state.current_image = image - image = blank_frame_reroll(image, args, root, frame_idx) - if image is None: - return - - opencv_image = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR) - if not using_vid_init: - prev_img = opencv_image - - if turbo_steps > 1: - turbo_prev_image, turbo_prev_frame_idx = turbo_next_image, turbo_next_frame_idx - turbo_next_image, turbo_next_frame_idx = opencv_image, frame_idx - frame_idx += turbo_steps - else: - filename = f"{args.timestring}_{frame_idx:05}.png" - save_image(image, 'PIL', filename, args, video_args, root) - - if anim_args.save_depth_maps: - if cmd_opts.lowvram or cmd_opts.medvram: - lowvram.send_everything_to_cpu() - sd_hijack.model_hijack.undo_hijack(sd_model) - devices.torch_gc() - depth_model.to(root.device) - depth = depth_model.predict(opencv_image, anim_args, root.half_precision) - depth_model.save(os.path.join(args.outdir, f"{args.timestring}_depth_{frame_idx:05}.png"), depth) - if cmd_opts.lowvram or cmd_opts.medvram: - depth_model.to('cpu') - devices.torch_gc() - lowvram.setup_for_low_vram(sd_model, cmd_opts.medvram) - sd_hijack.model_hijack.hijack(sd_model) - frame_idx += 1 - - state.current_image = image - - args.seed = next_seed(args) - -def print_render_table(anim_args, keys, frame_idx): - from rich.table import Table - from rich import box - table = Table(padding=0, box=box.ROUNDED) - field_names = [] - if anim_args.animation_mode == '2D': - short_zoom = round(keys.zoom_series[frame_idx], 6) - field_names += ["Angle", "Zoom"] - field_names += ["Tr X", "Tr Y"] - if anim_args.animation_mode == '3D': - field_names += ["Tr Z", "Ro X", "Ro Y", "Ro Z"] - if anim_args.enable_perspective_flip: - field_names += ["Pf T", "Pf 
P", "Pf G", "Pf F"] - for field_name in field_names: - table.add_column(field_name, justify="center") - - rows = [] - if anim_args.animation_mode == '2D': - rows += [str(keys.angle_series[frame_idx]),str(short_zoom)] - rows += [str(keys.translation_x_series[frame_idx]),str(keys.translation_y_series[frame_idx])] - if anim_args.animation_mode == '3D': - rows += [str(keys.translation_z_series[frame_idx]),str(keys.rotation_3d_x_series[frame_idx]),str(keys.rotation_3d_y_series[frame_idx]),str(keys.rotation_3d_z_series[frame_idx])] - if anim_args.enable_perspective_flip: - rows +=[str(keys.perspective_flip_theta_series[frame_idx]), str(keys.perspective_flip_phi_series[frame_idx]), str(keys.perspective_flip_gamma_series[frame_idx]), str(keys.perspective_flip_fv_series[frame_idx])] - table.add_row(*rows) - - console.print(table) \ No newline at end of file diff --git a/spaces/jhj0517/Segment-Anything-Layer-Divider/modules/mask_utils.py b/spaces/jhj0517/Segment-Anything-Layer-Divider/modules/mask_utils.py deleted file mode 100644 index f1b894decdedfbd55d9fdba609da11952678fb77..0000000000000000000000000000000000000000 --- a/spaces/jhj0517/Segment-Anything-Layer-Divider/modules/mask_utils.py +++ /dev/null @@ -1,99 +0,0 @@ -import cv2 -import numpy as np -from pycocotools import mask as coco_mask -from pytoshop import layers -import pytoshop -from pytoshop.enums import BlendMode -from datetime import datetime - - -def generate_random_color(): - return np.random.randint(0, 256), np.random.randint(0, 256), np.random.randint(0, 256) - - -def create_base_layer(image): - rgba_image = cv2.cvtColor(image, cv2.COLOR_RGB2RGBA) - return [rgba_image] - - -def create_mask_layers(image, masks): - layer_list = [] - - for result in masks: - rle = result['segmentation'] - mask = coco_mask.decode(rle).astype(np.uint8) - rgba_image = cv2.cvtColor(image, cv2.COLOR_RGB2RGBA) - rgba_image[..., 3] = cv2.bitwise_and(rgba_image[..., 3], rgba_image[..., 3], mask=mask) - - layer_list.append(rgba_image) 
- - return layer_list - - -def create_mask_gallery(image, masks): - mask_array_list = [] - label_list = [] - - for index, result in enumerate(masks): - rle = result['segmentation'] - mask = coco_mask.decode(rle).astype(np.uint8) - - rgba_image = cv2.cvtColor(image, cv2.COLOR_RGB2RGBA) - rgba_image[..., 3] = cv2.bitwise_and(rgba_image[..., 3], rgba_image[..., 3], mask=mask) - - mask_array_list.append(rgba_image) - label_list.append(f'Part {index}') - - return [[img, label] for img, label in zip(mask_array_list, label_list)] - - -def create_mask_combined_images(image, masks): - final_result = np.zeros_like(image) - - for result in masks: - rle = result['segmentation'] - mask = coco_mask.decode(rle).astype(np.uint8) - - color = generate_random_color() - colored_mask = np.zeros_like(image) - colored_mask[mask == 1] = color - - final_result = cv2.addWeighted(final_result, 1, colored_mask, 0.5, 0) - - combined_image = cv2.addWeighted(image, 1, final_result, 0.5, 0) - return [combined_image, "masked"] - - -def insert_psd_layer(psd, image_data, layer_name, blending_mode): - channel_data = [layers.ChannelImageData(image=image_data[:, :, i], compression=1) for i in range(4)] - - layer_record = layers.LayerRecord( - channels={-1: channel_data[3], 0: channel_data[0], 1: channel_data[1], 2: channel_data[2]}, - top=0, bottom=image_data.shape[0], left=0, right=image_data.shape[1], - blend_mode=blending_mode, - name=layer_name, - opacity=255, - ) - psd.layer_and_mask_info.layer_info.layer_records.append(layer_record) - return psd - - -def save_psd(input_image_data, layer_data, layer_names, blending_modes): - psd_file = pytoshop.core.PsdFile(num_channels=3, height=input_image_data.shape[0], width=input_image_data.shape[1]) - - for index, layer in enumerate(layer_data): - psd_file = insert_psd_layer(psd_file, layer, layer_names[index], blending_modes[index]) - - timestamp = datetime.now().strftime("%m%d%H%M%S") - with open(f"outputs/psd/result-{timestamp}.psd", 'wb') as output_file: 
- psd_file.write(output_file) - - -def save_psd_with_masks(image, masks): - original_layer = create_base_layer(image) - mask_layers = create_mask_layers(image, masks) - names = [f'Part {i}' for i in range(len(mask_layers))] - modes = [BlendMode.normal] * (len(mask_layers)+1) - save_psd(image, original_layer+mask_layers, ['Original_Image']+names, modes) - - diff --git a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/commons.py b/spaces/jimschat/VITS-Umamusume-voice-synthesizer/commons.py deleted file mode 100644 index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000 --- a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/commons.py +++ /dev/null @@ -1,97 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, 
length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path diff --git a/spaces/jkang/demo-artist-classifier/gradcam_utils.py b/spaces/jkang/demo-artist-classifier/gradcam_utils.py deleted file mode 100644 index 298e43c9af1d3d204d835a3b067d96c8af5c4353..0000000000000000000000000000000000000000 --- a/spaces/jkang/demo-artist-classifier/gradcam_utils.py +++ /dev/null @@ -1,141 +0,0 @@ -''' -Grad-CAM visualization utilities - -- Based on https://keras.io/examples/vision/grad_cam/ - ---- -- 2021-12-18 jkang first created -- 2022-01-16 - - copied from https://huggingface.co/spaces/jkang/demo-gradcam-imagenet/blob/main/utils.py - - updated for artist/trend classifier -''' -import matplotlib.cm as cm - -import os -import re -from glob import glob -import numpy as np -import tensorflow as tf -tfk = tf.keras -K = tfk.backend - -# Disable GPU for testing -# 
os.environ['CUDA_VISIBLE_DEVICES'] = '-1' - - -def get_imagenet_classes(): - '''Retrieve all 1000 imagenet classes/labels as dictionaries''' - classes = tfk.applications.imagenet_utils.decode_predictions( - np.expand_dims(np.arange(1000), 0), top=1000 - ) - idx2lab = {cla[2]: cla[1] for cla in classes[0]} - lab2idx = {idx2lab[idx]: idx for idx in idx2lab} - return idx2lab, lab2idx - - -def search_by_name(str_part): - '''Search imagenet class by partial matching string''' - results = [key for key in list(lab2idx.keys()) if re.search(str_part, key)] - if len(results) != 0: - return [(key, lab2idx[key]) for key in results] - else: - return [] - - -def get_xception_model(): - '''Get model to use''' - base_model = tfk.applications.xception.Xception - preprocessor = tfk.applications.xception.preprocess_input - decode_predictions = tfk.applications.xception.decode_predictions - last_conv_layer_name = "block14_sepconv2_act" - - model = base_model(weights='imagenet') - grad_model = tfk.models.Model( - inputs=[model.inputs], - outputs=[model.get_layer(last_conv_layer_name).output, - model.output] - ) - return model, grad_model, preprocessor, decode_predictions - - -def get_img_4d_array(image_file, image_size=(299, 299)): - '''Load image as 4d array''' - img = tfk.preprocessing.image.load_img( - image_file, target_size=image_size) # PIL obj - img_array = tfk.preprocessing.image.img_to_array( - img) # float32 numpy array - img_array = np.expand_dims(img_array, axis=0) # 3d -> 4d (1,299,299,3) - return img_array - - -def make_gradcam_heatmap(grad_model, img_array, pred_idx=None): - '''Generate heatmap to overlay with - - img_array: 4d numpy array - - pred_idx: eg. index out of 1000 imagenet classes - if None, argmax is chosen from prediction - ''' - # Get gradient of pred class w.r.t. 
last conv activation - with tf.GradientTape() as tape: - last_conv_act, predictions = grad_model(img_array) - if pred_idx is None: - pred_idx = tf.argmax(predictions[0]) - class_channel = predictions[:, pred_idx] # (1,1000) => (1,) - - # d(class_channel/last_conv_act) - grads = tape.gradient(class_channel, last_conv_act) - pooled_grads = tf.reduce_mean(grads, axis=( - 0, 1, 2)) # (1,10,10,2048) => (2048,) - - # (10,10,2048) x (2048,1) => (10,10,1) - heatmap = last_conv_act[0] @ pooled_grads[..., tf.newaxis] - heatmap = tf.squeeze(heatmap) # (10,10) - - # Normalize heatmap between 0 and 1 - heatmap = tf.maximum(heatmap, 0) / tf.math.reduce_max(heatmap) - return heatmap, pred_idx.numpy(), predictions.numpy().squeeze() - - -def align_image_with_heatmap(img_array, heatmap, alpha=0.3, cmap='jet'): - '''Align the image with gradcam heatmap - - img_array: 4d numpy array - - heatmap: output of `def make_gradcam_heatmap()` as 2d numpy array - ''' - img_array = img_array.squeeze() # 4d => 3d - - # Rescale to 0-255 range - heatmap_scaled = np.uint8(255 * heatmap) - img_array_scaled = np.uint8(255 * img_array) - - colormap = cm.get_cmap(cmap) - colors = colormap(np.arange(256))[:, :3] # mapping RGB to heatmap - heatmap_colored = colors[heatmap_scaled] # ? 
still unclear - - # Make RGB colorized heatmap - heatmap_colored = (tfk.preprocessing.image.array_to_img(heatmap_colored) # array => PIL - .resize((img_array.shape[1], img_array.shape[0]))) - heatmap_colored = tfk.preprocessing.image.img_to_array( - heatmap_colored) # PIL => array - - # Overlay image with heatmap - overlaid_img = heatmap_colored * alpha + img_array_scaled - overlaid_img = tfk.preprocessing.image.array_to_img(overlaid_img) - return overlaid_img - - -if __name__ == '__main__': - # Test GradCAM - examples = sorted(glob(os.path.join('examples', '*.jpg'))) - idx2lab, lab2idx = get_imagenet_classes() - - model, grad_model, preprocessor, decode_predictions = get_xception_model() - - img_4d_array = get_img_4d_array(examples[0]) - img_4d_array = preprocessor(img_4d_array) - - # make_gradcam_heatmap returns (heatmap, pred_idx, predictions); unpack all three - heatmap, pred_idx, preds = make_gradcam_heatmap(grad_model, img_4d_array, pred_idx=None) - - img_pil = align_image_with_heatmap( - img_4d_array, heatmap, alpha=0.3, cmap='jet') - - img_pil.save('test.jpg') - print('done') \ No newline at end of file diff --git a/spaces/jmcob/StreamlitGrammarCorrectorStyler/app.py b/spaces/jmcob/StreamlitGrammarCorrectorStyler/app.py deleted file mode 100644 index b4c3e2786ade379e32b406eca391d95d7683b71d..0000000000000000000000000000000000000000 --- a/spaces/jmcob/StreamlitGrammarCorrectorStyler/app.py +++ /dev/null @@ -1,193 +0,0 @@ -import streamlit as st -from multiprocessing import Process -from annotated_text import annotated_text -from bs4 import BeautifulSoup -import pandas as pd -import torch -import math -import re -import json -import requests -import spacy -import errant -import time -import os - -def start_server(): - os.system("python3 -m spacy download en_core_web_sm") - os.system("uvicorn GrammarTokenize:app --port 8080 --host 0.0.0.0 --workers 2") - -def load_models(): - if not is_port_in_use(8080): - with st.spinner(text="Loading models, please wait..."): - proc = Process(target=start_server, args=(), daemon=True) - proc.start() - while not 
is_port_in_use(8080): - time.sleep(1) - st.success("Model server started.") - else: - st.success("Model server already running...") - st.session_state['models_loaded'] = True - -def is_port_in_use(port): - import socket - with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: - return s.connect_ex(('0.0.0.0', port)) == 0 - -if 'models_loaded' not in st.session_state: - st.session_state['models_loaded'] = False - - -def show_highlights(input_text, corrected_sentence): - try: - strikeout = lambda x: '\u0336'.join(x) + '\u0336' - highlight_text = highlight(input_text, corrected_sentence) - color_map = {'d':'#faa', 'a':'#afa', 'c':'#fea'} - tokens = re.split(r'(<[dac]\s.*?<\/[dac]>)', highlight_text) - annotations = [] - for token in tokens: - soup = BeautifulSoup(token, 'html.parser') - tags = soup.findAll() - if tags: - _tag = tags[0].name - _type = tags[0]['type'] - _text = tags[0]['edit'] - _color = color_map[_tag] - - if _tag == 'd': - _text = strikeout(tags[0].text) - - annotations.append((_text, _type, _color)) - else: - annotations.append(token) - annotated_text(*annotations) - except Exception as e: - st.error('Some error occurred!' 
+ str(e)) - st.stop() - -def show_edits(input_text, corrected_sentence): - try: - edits = get_edits(input_text, corrected_sentence) - df = pd.DataFrame(edits, columns=['type','original word', 'original start', 'original end', 'correct word', 'correct start', 'correct end']) - df = df.set_index('type') - st.table(df) - except Exception as e: - st.error('Some error occurred!') - st.stop() - -def highlight(orig, cor): - edits = _get_edits(orig, cor) - orig_tokens = orig.split() - - ignore_indexes = [] - - for edit in edits: - edit_type = edit[0] - edit_str_start = edit[1] - edit_spos = edit[2] - edit_epos = edit[3] - edit_str_end = edit[4] - - # if no_of_tokens(edit_str_start) > 1 ==> excluding the first token, mark all other tokens for deletion - for i in range(edit_spos+1, edit_epos): - ignore_indexes.append(i) - - if edit_str_start == "": - if edit_spos - 1 >= 0: - new_edit_str = orig_tokens[edit_spos - 1] - edit_spos -= 1 - else: - new_edit_str = orig_tokens[edit_spos + 1] - edit_spos += 1 - if edit_type == "PUNCT": - st = "<a type='" + edit_type + "' edit='" + edit_str_end + "'>" + new_edit_str + "</a>" - else: - st = "<a type='" + edit_type + "' edit='" + new_edit_str + " " + edit_str_end + "'>" + new_edit_str + "</a>" - orig_tokens[edit_spos] = st - elif edit_str_end == "": - st = "<d type='" + edit_type + "' edit=''>" + edit_str_start + "</d>" - orig_tokens[edit_spos] = st - else: - st = "<c type='" + edit_type + "' edit='" + edit_str_end + "'>" + edit_str_start + "</c>" - orig_tokens[edit_spos] = st - - for i in sorted(ignore_indexes, reverse=True): - del(orig_tokens[i]) - - return(" ".join(orig_tokens)) - - -def _get_edits(orig, cor): - orig = annotator.parse(orig) - cor = annotator.parse(cor) - alignment = annotator.align(orig, cor) - edits = annotator.merge(alignment) - - if len(edits) == 0: - return [] - - edit_annotations = [] - for e in edits: - e = annotator.classify(e) - edit_annotations.append((e.type[2:], e.o_str, e.o_start, e.o_end, e.c_str, e.c_start, e.c_end)) - - if len(edit_annotations) > 0: - return edit_annotations - else: - return [] - -def get_edits(orig, cor): - return _get_edits(orig, cor) - -def get_correction(input_text): - correct_request = 
"http://0.0.0.0:8080/correct?input_sentence="+input_text - correct_response = requests.get(correct_request) - correct_json = json.loads(correct_response.text) - scored_corrected_sentence = correct_json["scored_corrected_sentence"] - - corrected_sentence, score = scored_corrected_sentence - st.markdown(f'##### Corrected text:') - st.write('') - st.success(corrected_sentence) - exp1 = st.expander(label='Show highlights', expanded=True) - with exp1: - show_highlights(input_text, corrected_sentence) - exp2 = st.expander(label='Show edits') - with exp2: - show_edits(input_text, corrected_sentence) - - -if __name__ == "__main__": - - st.title('Grammar Styler') - st.subheader('Grammar and sentence structure restyler') - examples = [ - "I looked at the med cabinet and meds are out. Can you order me more?", - "Been spendin my whole life jus to her dat song", - "whatdjya think about dat?", - "Lets git sum holesome waves and go surfin" - ] - - if not st.session_state['models_loaded']: - load_models() - - import en_core_web_sm - nlp = en_core_web_sm.load() - annotator = errant.load('en', nlp) - - st.markdown(f'##### Try it now:') - input_text = st.selectbox( - label="Choose an example", - options=examples - ) - st.write("(or)") - input_text = st.text_input( - label="Bring your own sentence", - value=input_text - ) - - if input_text.strip(): - get_correction(input_text) \ No newline at end of file diff --git a/spaces/joao-victor-campos/netflix-recommendation-model/recommendation_app/core/data_handler/__inity__.py b/spaces/joao-victor-campos/netflix-recommendation-model/recommendation_app/core/data_handler/__inity__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/AES.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/AES.py deleted file mode 100644 index 
40441f4c5c22a56520fe89cd8b8b38240b0faea1..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/AES.py +++ /dev/null @@ -1,234 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Cipher/AES.py : AES -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
-# =================================================================== - -import sys - -from Crypto.Cipher import _create_cipher -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, - c_size_t, c_uint8_ptr) - -from Crypto.Util import _cpu_features -from Crypto.Random import get_random_bytes - -MODE_ECB = 1 #: Electronic Code Book (:ref:`ecb_mode`) -MODE_CBC = 2 #: Cipher-Block Chaining (:ref:`cbc_mode`) -MODE_CFB = 3 #: Cipher Feedback (:ref:`cfb_mode`) -MODE_OFB = 5 #: Output Feedback (:ref:`ofb_mode`) -MODE_CTR = 6 #: Counter mode (:ref:`ctr_mode`) -MODE_OPENPGP = 7 #: OpenPGP mode (:ref:`openpgp_mode`) -MODE_CCM = 8 #: Counter with CBC-MAC (:ref:`ccm_mode`) -MODE_EAX = 9 #: :ref:`eax_mode` -MODE_SIV = 10 #: Synthetic Initialization Vector (:ref:`siv_mode`) -MODE_GCM = 11 #: Galois Counter Mode (:ref:`gcm_mode`) -MODE_OCB = 12 #: Offset Code Book (:ref:`ocb_mode`) - - -_cproto = """ - int AES_start_operation(const uint8_t key[], - size_t key_len, - void **pResult); - int AES_encrypt(const void *state, - const uint8_t *in, - uint8_t *out, - size_t data_len); - int AES_decrypt(const void *state, - const uint8_t *in, - uint8_t *out, - size_t data_len); - int AES_stop_operation(void *state); - """ - - -# Load portable AES -_raw_aes_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_aes", - _cproto) - -# Try to load AES with AES NI instructions -try: - _raw_aesni_lib = None - if _cpu_features.have_aes_ni(): - _raw_aesni_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_aesni", - _cproto.replace("AES", - "AESNI")) -# _raw_aesni may not have been compiled in -except OSError: - pass - - -def _create_base_cipher(dict_parameters): - """This method instantiates and returns a handle to a low-level - base cipher. 
It will absorb named parameters in the process.""" - - use_aesni = dict_parameters.pop("use_aesni", True) - - try: - key = dict_parameters.pop("key") - except KeyError: - raise TypeError("Missing 'key' parameter") - - if len(key) not in key_size: - raise ValueError("Incorrect AES key length (%d bytes)" % len(key)) - - if use_aesni and _raw_aesni_lib: - start_operation = _raw_aesni_lib.AESNI_start_operation - stop_operation = _raw_aesni_lib.AESNI_stop_operation - else: - start_operation = _raw_aes_lib.AES_start_operation - stop_operation = _raw_aes_lib.AES_stop_operation - - cipher = VoidPointer() - result = start_operation(c_uint8_ptr(key), - c_size_t(len(key)), - cipher.address_of()) - if result: - raise ValueError("Error %X while instantiating the AES cipher" - % result) - return SmartPointer(cipher.get(), stop_operation) - - -def _derive_Poly1305_key_pair(key, nonce): - """Derive a tuple (r, s, nonce) for a Poly1305 MAC. - - If nonce is ``None``, a new 16-byte nonce is generated. - """ - - if len(key) != 32: - raise ValueError("Poly1305 with AES requires a 32-byte key") - - if nonce is None: - nonce = get_random_bytes(16) - elif len(nonce) != 16: - raise ValueError("Poly1305 with AES requires a 16-byte nonce") - - s = new(key[:16], MODE_ECB).encrypt(nonce) - return key[16:], s, nonce - - -def new(key, mode, *args, **kwargs): - """Create a new AES cipher. - - Args: - key(bytes/bytearray/memoryview): - The secret key to use in the symmetric cipher. - - It must be 16 (*AES-128)*, 24 (*AES-192*) or 32 (*AES-256*) bytes long. - - For ``MODE_SIV`` only, it doubles to 32, 48, or 64 bytes. - mode (a ``MODE_*`` constant): - The chaining mode to use for encryption or decryption. - If in doubt, use ``MODE_EAX``. - - Keyword Args: - iv (bytes/bytearray/memoryview): - (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, - and ``MODE_OPENPGP`` modes). - - The initialization vector to use for encryption or decryption. 
- - For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 16 bytes long. - - For ``MODE_OPENPGP`` mode only, - it must be 16 bytes long for encryption - and 18 bytes for decryption (in the latter case, it is - actually the *encrypted* IV which was prefixed to the ciphertext). - - If not provided, a random byte string is generated (you must then - read its value with the :attr:`iv` attribute). - - nonce (bytes/bytearray/memoryview): - (Only applicable for ``MODE_CCM``, ``MODE_EAX``, ``MODE_GCM``, - ``MODE_SIV``, ``MODE_OCB``, and ``MODE_CTR``). - - A value that must never be reused for any other encryption done - with this key (except possibly for ``MODE_SIV``, see below). - - For ``MODE_EAX``, ``MODE_GCM`` and ``MODE_SIV`` there are no - restrictions on its length (recommended: **16** bytes). - - For ``MODE_CCM``, its length must be in the range **[7..13]**. - Bear in mind that with CCM there is a trade-off between nonce - length and maximum message size. Recommendation: **11** bytes. - - For ``MODE_OCB``, its length must be in the range **[1..15]** - (recommended: **15**). - - For ``MODE_CTR``, its length must be in the range **[0..15]** - (recommended: **8**). - - For ``MODE_SIV``, the nonce is optional, if it is not specified, - then no nonce is being used, which renders the encryption - deterministic. - - If not provided, for modes other than ``MODE_SIV``, a random - byte string of the recommended length is used (you must then - read its value with the :attr:`nonce` attribute). - - segment_size (integer): - (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext - are segmented in. It must be a multiple of 8. - If not specified, it will be assumed to be 8. - - mac_len (integer): - (Only ``MODE_EAX``, ``MODE_GCM``, ``MODE_OCB``, ``MODE_CCM``) - Length of the authentication tag, in bytes. - - It must be even and in the range **[4..16]**. - The recommended value (and the default, if not specified) is **16**. 
- - msg_len (integer): - (Only ``MODE_CCM``). Length of the message to (de)cipher. - If not specified, ``encrypt`` must be called with the entire message. - Similarly, ``decrypt`` can only be called once. - - assoc_len (integer): - (Only ``MODE_CCM``). Length of the associated data. - If not specified, all associated data is buffered internally, - which may represent a problem for very large messages. - - initial_value (integer or bytes/bytearray/memoryview): - (Only ``MODE_CTR``). - The initial value for the counter. If not present, the cipher will - start counting from 0. The value is incremented by one for each block. - The counter number is encoded in big endian mode. - - counter (object): - (Only ``MODE_CTR``). - Instance of ``Crypto.Util.Counter``, which allows full customization - of the counter block. This parameter is incompatible to both ``nonce`` - and ``initial_value``. - - use_aesni: (boolean): - Use Intel AES-NI hardware extensions (default: use if available). - - Returns: - an AES object, of the applicable mode. - """ - - kwargs["add_aes_modes"] = True - return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) - - -# Size of a data block (in bytes) -block_size = 16 -# Size of a key (in bytes) -key_size = (16, 24, 32) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Math/Numbers.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Math/Numbers.py deleted file mode 100644 index c2c4483d6856943fde69268afa133b210da0e405..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Math/Numbers.py +++ /dev/null @@ -1,42 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. 
-# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
-# =================================================================== - -__all__ = ["Integer"] - -try: - from Crypto.Math._IntegerGMP import IntegerGMP as Integer - from Crypto.Math._IntegerGMP import implementation as _implementation -except (ImportError, OSError, AttributeError): - try: - from Crypto.Math._IntegerCustom import IntegerCustom as Integer - from Crypto.Math._IntegerCustom import implementation as _implementation - except (ImportError, OSError): - from Crypto.Math._IntegerNative import IntegerNative as Integer - _implementation = {} diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/DNAME.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/DNAME.py deleted file mode 100644 index 556bff59e3de793c9321415897bad4a321e321d8..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/DNAME.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- -import dns.immutable -import dns.rdtypes.nsbase - - -@dns.immutable.immutable -class DNAME(dns.rdtypes.nsbase.UncompressedNS): - - """DNAME record""" - - def _to_wire(self, file, compress=None, origin=None, canonicalize=False): - self.target.to_wire(file, None, origin, canonicalize) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/token_counter/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/token_counter/__init__.py deleted file mode 100644 index 1d4640565ae2765d9ca96a509dc9809217f62f2f..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/token_counter/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""Init file.""" diff --git a/spaces/johnowhitaker/color-guided-wikiart-diffusion/app.py b/spaces/johnowhitaker/color-guided-wikiart-diffusion/app.py deleted file mode 100644 index a9d6ced56c0a97aa9ef8fb9600f6333613aa9949..0000000000000000000000000000000000000000 --- a/spaces/johnowhitaker/color-guided-wikiart-diffusion/app.py +++ /dev/null @@ -1,68 +0,0 @@ -import gradio as gr -import torch, torchvision -import torch.nn.functional as F -import numpy as np -from PIL import Image, ImageColor -from diffusers import DDPMPipeline -from diffusers import DDIMScheduler - -device = 'mps' if torch.backends.mps.is_available() else 'cuda' if torch.cuda.is_available() else 'cpu' - -# Load the pretrained pipeline -pipeline_name = 'johnowhitaker/sd-class-wikiart-from-bedrooms' -image_pipe = DDPMPipeline.from_pretrained(pipeline_name).to(device) - -# Set up the scheduler -scheduler = DDIMScheduler.from_pretrained(pipeline_name) -scheduler.set_timesteps(num_inference_steps=20) - -# The guidance function -def color_loss(images, target_color=(0.1, 0.9, 0.5)): - """Given a target color (R, G, B) return a loss for how far away on average - the images' pixels are from that color. 
Defaults to a light teal: (0.1, 0.9, 0.5) """ - target = torch.tensor(target_color).to(images.device) * 2 - 1 # Map target color to (-1, 1) - target = target[None, :, None, None] # Get shape right to work with the images (b, c, h, w) - error = torch.abs(images - target).mean() # Mean absolute difference between the image pixels and the target color - return error - -# And the core function to generate an image given the relevant inputs -def generate(color, guidance_loss_scale): - target_color = ImageColor.getcolor(color, "RGB") # Target color as RGB - target_color = [a/255 for a in target_color] # Rescale from (0, 255) to (0, 1) - x = torch.randn(1, 3, 256, 256).to(device) - for i, t in enumerate(scheduler.timesteps): - model_input = scheduler.scale_model_input(x, t) - with torch.no_grad(): - noise_pred = image_pipe.unet(model_input, t)["sample"] - x = x.detach().requires_grad_() - x0 = scheduler.step(noise_pred, t, x).pred_original_sample - loss = color_loss(x0, target_color) * guidance_loss_scale - cond_grad = -torch.autograd.grad(loss, x)[0] - x = x.detach() + cond_grad - x = scheduler.step(noise_pred, t, x).prev_sample - grid = torchvision.utils.make_grid(x, nrow=4) - im = grid.permute(1, 2, 0).cpu().clip(-1, 1)*0.5 + 0.5 - im = Image.fromarray(np.array(im*255).astype(np.uint8)) - im.save('test.jpeg') - return im - -# See the gradio docs for the types of inputs and outputs available -inputs = [ - gr.ColorPicker(label="color", value='55FFAA'), # Add any inputs you need here - gr.Slider(label="guidance_scale", minimum=0, maximum=30, value=3) -] -outputs = gr.Image(label="result") - -# Setting up a minimal interface to our function: -demo = gr.Interface( - fn=generate, - inputs=inputs, - outputs=outputs, - examples=[ - ["#BB2266", 3],["#44CCAA", 5] # You can provide some example inputs to get people started - ], -) - -# And launching -if __name__ == "__main__": - demo.launch(enable_queue=True) diff --git 
a/spaces/johnslegers/stable-diffusion-gui-test/ldmlib/modules/diffusionmodules/util.py b/spaces/johnslegers/stable-diffusion-gui-test/ldmlib/modules/diffusionmodules/util.py deleted file mode 100644 index c1dc1d424015d2c6c92342b85a992f931e5a1dc1..0000000000000000000000000000000000000000 --- a/spaces/johnslegers/stable-diffusion-gui-test/ldmlib/modules/diffusionmodules/util.py +++ /dev/null @@ -1,267 +0,0 @@ -# adopted from -# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py -# and -# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -# and -# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py -# -# thanks! - - -import os -import math -import torch -import torch.nn as nn -import numpy as np -from einops import repeat - -from ldmlib.util import instantiate_from_config - - -def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if schedule == "linear": - betas = ( - torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2 - ) - - elif schedule == "cosine": - timesteps = ( - torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s - ) - alphas = timesteps / (1 + cosine_s) * np.pi / 2 - alphas = torch.cos(alphas).pow(2) - alphas = alphas / alphas[0] - betas = 1 - alphas[1:] / alphas[:-1] - betas = np.clip(betas, a_min=0, a_max=0.999) - - elif schedule == "sqrt_linear": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) - elif schedule == "sqrt": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5 - else: - raise ValueError(f"schedule '{schedule}' unknown.") - return betas.numpy() - - -def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True): - if 
ddim_discr_method == 'uniform': - c = num_ddpm_timesteps // num_ddim_timesteps - ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c))) - elif ddim_discr_method == 'quad': - ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int) - else: - raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"') - - # assert ddim_timesteps.shape[0] == num_ddim_timesteps - # add one to get the final alpha values right (the ones from first scale to data during sampling) - steps_out = ddim_timesteps + 1 - if verbose: - print(f'Selected timesteps for ddim sampler: {steps_out}') - return steps_out - - -def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True): - # select alphas for computing the variance schedule - alphas = alphacums[ddim_timesteps] - alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist()) - - # according the the formula provided in https://arxiv.org/abs/2010.02502 - sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev)) - if verbose: - print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}') - print(f'For the chosen value of eta, which is {eta}, ' - f'this results in the following sigma_t schedule for ddim sampler {sigmas}') - return sigmas, alphas, alphas_prev - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. 
- """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -def extract_into_tensor(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def checkpoint(func, inputs, params, flag): - """ - Evaluate a function without caching intermediate activations, allowing for - reduced memory at the expense of extra compute in the backward pass. - :param func: the function to evaluate. - :param inputs: the argument sequence to pass to `func`. - :param params: a sequence of parameters `func` depends on but does not - explicitly take as arguments. - :param flag: if False, disable gradient checkpointing. - """ - if flag: - args = tuple(inputs) + tuple(params) - return CheckpointFunction.apply(func, len(inputs), *args) - else: - return func(*inputs) - - -class CheckpointFunction(torch.autograd.Function): - @staticmethod - def forward(ctx, run_function, length, *args): - ctx.run_function = run_function - ctx.input_tensors = list(args[:length]) - ctx.input_params = list(args[length:]) - - with torch.no_grad(): - output_tensors = ctx.run_function(*ctx.input_tensors) - return output_tensors - - @staticmethod - def backward(ctx, *output_grads): - ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors] - with torch.enable_grad(): - # Fixes a bug where the first op in run_function modifies the - # Tensor storage in place, which is not allowed for detach()'d - # Tensors. 
- shallow_copies = [x.view_as(x) for x in ctx.input_tensors] - output_tensors = ctx.run_function(*shallow_copies) - input_grads = torch.autograd.grad( - output_tensors, - ctx.input_tensors + ctx.input_params, - output_grads, - allow_unused=True, - ) - del ctx.input_tensors - del ctx.input_params - del output_tensors - return (None, None) + input_grads - - -def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False): - """ - Create sinusoidal timestep embeddings. - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. - """ - if not repeat_only: - half = dim // 2 - freqs = torch.exp( - -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - if dim % 2: - embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) - else: - embedding = repeat(timesteps, 'b -> b d', d=dim) - return embedding - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def scale_module(module, scale): - """ - Scale the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().mul_(scale) - return module - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def normalization(channels): - """ - Make a standard normalization layer. - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - return GroupNorm32(32, channels) - - -# PyTorch 1.7 has SiLU, but we support PyTorch 1.5. 
-class SiLU(nn.Module): - def forward(self, x): - return x * torch.sigmoid(x) - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def linear(*args, **kwargs): - """ - Create a linear module. - """ - return nn.Linear(*args, **kwargs) - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. - """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -class HybridConditioner(nn.Module): - - def __init__(self, c_concat_config, c_crossattn_config): - super().__init__() - self.concat_conditioner = instantiate_from_config(c_concat_config) - self.crossattn_conditioner = instantiate_from_config(c_crossattn_config) - - def forward(self, c_concat, c_crossattn): - c_concat = self.concat_conditioner(c_concat) - c_crossattn = self.crossattn_conditioner(c_crossattn) - return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]} - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() diff --git a/spaces/jordonpeter01/MusicGen2/tests/models/test_musicgen.py b/spaces/jordonpeter01/MusicGen2/tests/models/test_musicgen.py deleted file mode 100644 index d43cf73763f6c690ab0b277227ac225b286fa143..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen2/tests/models/test_musicgen.py 
+++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import pytest -import torch - -from audiocraft.models import MusicGen - - -class TestSEANetModel: - def get_musicgen(self): - mg = MusicGen.get_pretrained(name='debug', device='cpu') - mg.set_generation_params(duration=2.0, extend_stride=2.) - return mg - - def test_base(self): - mg = self.get_musicgen() - assert mg.frame_rate == 25 - assert mg.sample_rate == 32000 - assert mg.audio_channels == 1 - - def test_generate_unconditional(self): - mg = self.get_musicgen() - wav = mg.generate_unconditional(3) - assert list(wav.shape) == [3, 1, 64000] - - def test_generate_continuation(self): - mg = self.get_musicgen() - prompt = torch.randn(3, 1, 32000) - wav = mg.generate_continuation(prompt, 32000) - assert list(wav.shape) == [3, 1, 64000] - - prompt = torch.randn(2, 1, 32000) - wav = mg.generate_continuation( - prompt, 32000, ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 64000] - - prompt = torch.randn(2, 1, 32000) - with pytest.raises(AssertionError): - wav = mg.generate_continuation( - prompt, 32000, ['youpi', 'lapin dort', 'one too many']) - - def test_generate(self): - mg = self.get_musicgen() - wav = mg.generate( - ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 64000] - - def test_generate_long(self): - mg = self.get_musicgen() - mg.max_duration = 3. - mg.set_generation_params(duration=4., extend_stride=2.) 
- wav = mg.generate( - ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 32000 * 4] diff --git a/spaces/jorge-henao/ask2democracycol/about.py b/spaces/jorge-henao/ask2democracycol/about.py deleted file mode 100644 index 617d7dd6403c1674dbf4149c716c1571eb228eaa..0000000000000000000000000000000000000000 --- a/spaces/jorge-henao/ask2democracycol/about.py +++ /dev/null @@ -1,82 +0,0 @@ -from pinecone_quieries import PineconeProposalQueries -import streamlit as st - -def show_about_ask2democracy(): - description = """ -

    Sobre esta iniciativa

    -

    El debate ciudadano generalmente está sustentado en documentos que, salvo pocas excepciones, casi nadie lee. - En este demo se han indexado algunos textos relevantes para la discusión pública que suelen estar dispersos y poco accesibles. Además, se apoya en el estado del arte de la inteligencia artificial (abajo más detalles), permitiendo explorar los documentos haciéndoles preguntas en español. -

    - Por otro lado, las alucinaciones generadas por modelos de lenguaje grandes como ChatGPT/GPT-4 son un problema que en la práctica resulta en desinformación y posibles consecuencias aún desconocidas. OpenAI ha liderado el camino en el control de estas alucinaciones mediante el uso de RLHF para generar texto a partir del conocimiento "congelado" de los modelos de lenguaje. Sin embargo, esta aproximación no es viable en muchos dominios específicos. -

    - En este demo se aborda el problema de las alucinaciones utilizando una arquitectura RAG (Retrieval Augmented Generation). En el pipeline de consulta se utilizan modelos sentence transformers para obtener el top k de documentos candidatos, modelos Roberta para generar respuestas abstractivas tomadas de las fuentes y modelos generativos para aumentar las respuestas. - Esto le da un estilo conversacional similar al de ChatGPT, pero basado en fuentes. -

    - También se busca contribuir a la inteligencia artificial abierta y en español, mediante la construcción de datasets y el entrenamiento de modelos de lenguaje adaptados para las discusiones democráticas. Algo que puede ayudar a elevar la calidad del debate en todos los países de habla hispana. -

    - Textos indexados: Propuesta reforma pensional de Marzo 22 de 2023, Propuesta reforma de la salud del 13 febrero 2023 , Capítulo de hallazgos y recomendaciones de la comisión de la verdad sobre el conflicto armado Colombiano (trabajo en progreso, si quieres apoyar escríbeme) -

    - Creado por Jorge Henao 🇨🇴 Twitter LinkedIn Linktree -
    - Con el apoyo de David Torres 🇨🇴 Twitter LinkedIn -
    -

    -

    Sobre el trabajo realizado durante la Hackathon Somos NLP 2023

    - Las siguientes contribuciones fueron realizadas durante las fechas de la Hackathon (20 de Marzo al 9 de Abril de 2023): -

    En el espacio demo:

    -
      -
    • Refactor/Ajustes de integración con la base de datos vectorial Pinecone.
    • -
    • Pre-procesado e indexación de la propuesta de reforma pensional de Colombia de Marzo 2023.
    • -
    • Refactor UX y ajustes de usabilidad de la interfaz de usuario.
    • -
    • Ajustes de integración con OpenAI
    • -
    • Pruebas/Ajustes en el pipeline de consulta Sentence transformers usando texto en español y xlm-roberta-base-squad2-distilled
    • -
    -

    Modelos de lenguaje:

    - Fueron entrenados dos modelos Baizemocracy basados en LLaMA-7B con foco en aumentar los documentos retornados en el pipeline de consulta, con el fin de hacerlo más conversacional usando modelos open source en español. - Los siguientes modelos fueron entrenados con un dataset construido durante la hackathon, además de varios datasets orientados a Question answering y Chat. -
      -
    • baizemocracy-lora-7B-cfqa: Esta variación del modelo es más enfocada en generar respuestas factuales dado un contexto basado en fuentes.
    • -
    • baizemocracy-lora-7B-cfqa-conv: Esta variación del modelo tiene un estílo más conversacional para generar respuestas factuales dado un contexto basado en fuentes.
    • -
    -

    Datasets:

    -
      -
    • ask2democracy-cfqa-salud-pension: Un dataset de tipo instrucciones con respuestas a preguntas generadas a partir de textos de reforma sobre salud y pensiones en español
    • -
    -

    ¿Cómo utilizar este espacio?

    - Selecciona el documento que quieres explorar en el panel de la izquierda, escribe preguntas en la caja de texto y presiona el botón. - No se trata de un sistema de búsquedas basado en palabras clave; por el contrario, puedes redactar preguntas más extensas y elaboradas. Cuanto más contexto le des a la pregunta, mejores resultados obtienes. -

    Integración opcional con OpenAI

    - Este demo usa recursos de cómputo limitados sin costo para la gente (si quieres ayudar a que sea más rápido escríbeme). - De manera opcional, si tienes una cuenta en OpenAI también puedes activar la integración copiando tu API key en el panel de la izquierda. - Una vez ingreses el API key, cada vez que hagas una pregunta el sistema la usará para elaborar una respuesta breve a partir de los resultados de búsqueda obtenidos, basándose siempre en las fuentes oficiales. - También puedes configurar qué tan larga quieres que sea la respuesta (max tokens) y qué tan creativa (temperatura). -

    Nota:El sistema no guarda tu API key, sólo la utiliza para aumentar tus consultas mientras lo uses. -

    Inteligencia artificial y democracia

    - Pretende ayudar a construir democracia participativa apalancándose en el estado del arte de la inteligencia artificial. - Al ser un demo accesible en web, puede ayudarle a un ciudadano del común a tener una opinión más informada, ayudándole a ser partícipe del debate público haciendo preguntas directamente a las fuentes en su propio lenguaje y llegando a sus propias conclusiones. -

    -Respecto a la inteligencia artificial hay algunas hipótesis que se quieren probar: -

      -
    • ¿Qué tan efectivo puede ser un sistema de búsquedas con modelos de inteligencia artificial abiertos para ayudar a la gente a entender discusiones ciudadanas relevantes en español?
    • -
    • ¿Qué tan creativa puede ser la inteligencia artificial en esa materia?
    • -
    • ¿Puede la inteligencia artificial abierta ayudarle a la gente a entender documentos legislativos: propuestas de reforma, planes de gobierno y, en general, documentos de discusión pública?
    • -
    • ¿Puede un sistema RAG usando modelos abiertos reducir las alucinaciones presentadas en sistemas como ChatGPT/GPT-4 de OpenAI para el entendimiento de discusiones democráticas en español?
    • -
    - Por lo anterior, se busca contribuir a la inteligencia artificial abierta y en español, mediante la construcción de datasets y el entrenamiento de modelos de lenguaje adaptados para las discusiones democráticas. - Algo que puede ayudar a elevar la calidad del debate en todos los países de habla hispana. -

    Ask2Democracy v0.3

    - Se utiliza una arquitectura RAG (Retrieval Augmented Generation) para aumentar las respuestas basadas en fuentes de manera conversacional. - Esta versión usa sentence transformers (Cosine similarity), una base de datos vectorial Pinecone para almacenar los embeddings, Haystack framework y la integración con OpenAI. - Los modelos de lenguaje transformers utilizados son: - -sentence-transformers/multi-qa-MiniLM-L6-cos-v1 -deepset/xlm-roberta-base-squad2-distilled - - repo en github con FastAPI -

    Beta disclaimer

    - Las respuestas que arroja el sistema no han sido pregrabadas ni basadas en opiniones. Todas son respuestas extraídas de fuentes oficiales. - Este demo usa modelos de lenguaje para entender el lenguaje español, sin embargo, necesita de un mayor entrenamiento por lo que, en ocasiones, puede ser confuso y no tan preciso. - Si quieres apoyar escríbeme a jorge.henao@diezonce.co -

    - """ - st.markdown(description, unsafe_allow_html=True) \ No newline at end of file diff --git a/spaces/jotarodadada/animeCf/upcunet_v3.py b/spaces/jotarodadada/animeCf/upcunet_v3.py deleted file mode 100644 index f7919a6cc9efe3b8af73a73e30825a4c7d7d76da..0000000000000000000000000000000000000000 --- a/spaces/jotarodadada/animeCf/upcunet_v3.py +++ /dev/null @@ -1,714 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F -import os, sys -import numpy as np - -root_path = os.path.abspath('.') -sys.path.append(root_path) - - -class SEBlock(nn.Module): - def __init__(self, in_channels, reduction=8, bias=False): - super(SEBlock, self).__init__() - self.conv1 = nn.Conv2d(in_channels, in_channels // reduction, 1, 1, 0, bias=bias) - self.conv2 = nn.Conv2d(in_channels // reduction, in_channels, 1, 1, 0, bias=bias) - - def forward(self, x): - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - x0 = torch.mean(x.float(), dim=(2, 3), keepdim=True).half() - else: - x0 = torch.mean(x, dim=(2, 3), keepdim=True) - x0 = self.conv1(x0) - x0 = F.relu(x0, inplace=True) - x0 = self.conv2(x0) - x0 = torch.sigmoid(x0) - x = torch.mul(x, x0) - return x - - def forward_mean(self, x, x0): - x0 = self.conv1(x0) - x0 = F.relu(x0, inplace=True) - x0 = self.conv2(x0) - x0 = torch.sigmoid(x0) - x = torch.mul(x, x0) - return x - - -class UNetConv(nn.Module): - def __init__(self, in_channels, mid_channels, out_channels, se): - super(UNetConv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d(in_channels, mid_channels, 3, 1, 0), - nn.LeakyReLU(0.1, inplace=True), - nn.Conv2d(mid_channels, out_channels, 3, 1, 0), - nn.LeakyReLU(0.1, inplace=True), - ) - if se: - self.seblock = SEBlock(out_channels, reduction=8, bias=True) - else: - self.seblock = None - - def forward(self, x): - z = self.conv(x) - if self.seblock is not None: - z = self.seblock(z) - return z - - -class UNet1(nn.Module): - def __init__(self, in_channels, out_channels, 
deconv): - super(UNet1, self).__init__() - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 128, 64, se=True) - self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv3 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - def forward_a(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x1, x2): - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - -class UNet1x3(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet1x3, self).__init__() - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 128, 64, se=True) - self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv3 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 5, 3, 2) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m 
in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - def forward_a(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x1, x2): - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - -class UNet2(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet2, self).__init__() - - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 64, 128, se=True) - self.conv2_down = nn.Conv2d(128, 128, 2, 2, 0) - self.conv3 = UNetConv(128, 256, 128, se=True) - self.conv3_up = nn.ConvTranspose2d(128, 128, 2, 2, 0) - self.conv4 = UNetConv(128, 64, 64, se=True) - self.conv4_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv5 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - 
nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - - x3 = self.conv2_down(x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - x3 = self.conv3(x3) - x3 = self.conv3_up(x3) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - - x2 = F.pad(x2, (-4, -4, -4, -4)) - x4 = self.conv4(x2 + x3) - x4 = self.conv4_up(x4) - x4 = F.leaky_relu(x4, 0.1, inplace=True) - - x1 = F.pad(x1, (-16, -16, -16, -16)) - x5 = self.conv5(x1 + x4) - x5 = F.leaky_relu(x5, 0.1, inplace=True) - - z = self.conv_bottom(x5) - return z - - def forward_a(self, x): # conv234结尾有se - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x2): # conv234结尾有se - x3 = self.conv2_down(x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - x3 = self.conv3.conv(x3) - return x3 - - def forward_c(self, x2, x3): # conv234结尾有se - x3 = self.conv3_up(x3) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - - x2 = F.pad(x2, (-4, -4, -4, -4)) - x4 = self.conv4.conv(x2 + x3) - return x4 - - def forward_d(self, x1, x4): # conv234结尾有se - x4 = self.conv4_up(x4) - x4 = F.leaky_relu(x4, 0.1, inplace=True) - - x1 = F.pad(x1, (-16, -16, -16, -16)) - x5 = self.conv5(x1 + x4) - x5 = F.leaky_relu(x5, 0.1, inplace=True) - - z = self.conv_bottom(x5) - return z - - -class UpCunet2x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet2x, self).__init__() - self.unet1 = UNet1(in_channels, out_channels, deconv=True) - self.unet2 = UNet2(in_channels, out_channels, deconv=False) - - def forward(self, x, tile_mode): # 1.7G - n, c, h0, w0 = x.shape - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 2 + 1) * 2 - pw = ((w0 - 1) // 2 + 1) * 2 - x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - 
x = torch.add(x0, x1) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 2, :w0 * 2] - return x - elif (tile_mode == 1): # halve the longer side - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # must stay divisible by 2 after halving, so make it divisible by 4 first - crop_size_h = (h0 - 1) // 2 * 2 + 2 # divisible by 2 - else: - crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # must stay divisible by 2 after halving, so make it divisible by 4 first - crop_size_w = (w0 - 1) // 2 * 2 + 2 # divisible by 2 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # halve both h and w - crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G - elif (tile_mode == 3): # h and w to one third - crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.2G - elif (tile_mode == 4): # h and w to one quarter - crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 36, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 36, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 36, j:j + crop_size[1] + 36] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - 
opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 36, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 
= self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0 is the final output of unet2 - opt_res_dict[i][j] = x_crop - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 2 - 72, w * 2 - 72)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - res[:, :, i * 2:i * 2 + h1 * 2 - 72, j * 2:j * 2 + w1 * 2 - 72] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 2, :w0 * 2] - return res # - - -class UpCunet3x(nn.Module): # seamless tiling, fully lossless - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet3x, self).__init__() - self.unet1 = UNet1x3(in_channels, out_channels, deconv=True) - self.unet2 = UNet2(in_channels, out_channels, deconv=False) - - def forward(self, x, tile_mode): # 1.7G - n, c, h0, w0 = x.shape - if (tile_mode == 0): # no tiling - ph = ((h0 - 1) // 4 + 1) * 4 - pw = ((w0 - 1) // 4 + 1) * 4 - x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') # must be divisible by 4 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 3, :w0 * 3] - return x - elif (tile_mode == 1): # halve the longer side - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 8 * 8 + 8) // 2 # must stay divisible by 4 after halving, so make it divisible by 8 first - crop_size_h = (h0 - 1) // 4 * 4 + 4 # divisible by 4 - else: - crop_size_h = ((h0 - 1) // 8 * 8 + 8) // 2 # must stay divisible by 4 after halving, so make it divisible by 8 first - crop_size_w = (w0 - 1) // 4 * 4 + 4 # divisible by 4 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # halve both h and w - crop_size = (((h0 - 1) // 8 * 8 + 8) // 2, ((w0 - 1) // 8 * 8 + 8) // 2) # 5.6G - elif (tile_mode == 3): # h and w to one third - crop_size = (((h0 - 1) // 12 * 12 + 12) // 3, ((w0 - 1) // 12 * 12 + 12) // 3) # 4.2G - elif (tile_mode == 4): # h and w to one quarter - crop_size = (((h0 - 1) // 16 * 16 + 16) // 4, ((w0 - 1) // 16 * 16 + 16) // 4) # 3.7G - ph = ((h0 - 
1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 28, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 28, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 28, j:j + crop_size[1] + 28] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in 
x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 28, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0 is the final output of unet2 - opt_res_dict[i][j] = x_crop # - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 3 - 84, w * 3 - 84)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - res[:, :, i * 3:i * 3 + h1 * 3 - 84, j * 3:j * 3 + w1 * 3 - 84] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 3, :w0 * 3] - return res - - -class UpCunet4x(nn.Module): # seamless tiling, fully lossless - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet4x, self).__init__() - self.unet1 = UNet1(in_channels, 64, 
deconv=True) - self.unet2 = UNet2(64, 64, deconv=False) - self.ps = nn.PixelShuffle(2) - self.conv_final = nn.Conv2d(64, 12, 3, 1, padding=0, bias=True) - - def forward(self, x, tile_mode): - n, c, h0, w0 = x.shape - x00 = x - if (tile_mode == 0): # no tiling - ph = ((h0 - 1) // 2 + 1) * 2 - pw = ((w0 - 1) // 2 + 1) * 2 - x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') # must be divisible by 2 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - x = self.conv_final(x) - x = F.pad(x, (-1, -1, -1, -1)) - x = self.ps(x) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 4, :w0 * 4] - x += F.interpolate(x00, scale_factor=4, mode='nearest') - return x - elif (tile_mode == 1): # halve the longer side - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # must stay divisible by 2 after halving, so make it divisible by 4 first - crop_size_h = (h0 - 1) // 2 * 2 + 2 # divisible by 2 - else: - crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # must stay divisible by 2 after halving, so make it divisible by 4 first - crop_size_w = (w0 - 1) // 2 * 2 + 2 # divisible by 2 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # halve both h and w - crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G - elif (tile_mode == 3): # h and w to one third - crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.1G - elif (tile_mode == 4): # h and w to one quarter - crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 38, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 38, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 38, j:j + crop_size[1] + 38] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if 
("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = 
self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 38, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0 is the final output of unet2 - x_crop = self.conv_final(x_crop) - x_crop = F.pad(x_crop, (-1, -1, -1, -1)) - x_crop = self.ps(x_crop) - opt_res_dict[i][j] = x_crop - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 4 - 152, w * 4 - 152)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - # print(opt_res_dict[i][j].shape,res[:, :, i * 4:i * 4 + h1 * 4 - 144, j * 4:j * 4 + w1 * 4 - 144].shape) - res[:, :, i * 4:i * 4 + h1 * 4 - 152, j * 4:j * 4 + w1 * 4 - 152] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 4, :w0 * 4] - res += F.interpolate(x00, scale_factor=4, mode='nearest') - return res # - - -class RealWaifuUpScaler(object): - def __init__(self, scale, weight_path, half, device): - weight = torch.load(weight_path, map_location="cpu") - self.model = eval("UpCunet%sx" % scale)() - if (half == True): - self.model = self.model.half().to(device) - else: - self.model = self.model.to(device) - self.model.load_state_dict(weight, strict=True) - self.model.eval() - self.half = half - self.device = device - - def np2tensor(self, np_frame): - if (self.half == 
False): - return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).float() / 255 - else: - return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).half() / 255 - - def tensor2np(self, tensor): - if (self.half == False): - return ( - np.transpose((tensor.data.squeeze() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), (1, 2, 0))) - else: - return (np.transpose((tensor.data.squeeze().float() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), - (1, 2, 0))) - - def __call__(self, frame, tile_mode): - with torch.no_grad(): - tensor = self.np2tensor(frame) - result = self.tensor2np(self.model(tensor, tile_mode)) - return result - - -if __name__ == "__main__": - ###########inference_img - import time, cv2, sys - from time import time as ttime - - for weight_path, scale in [("weights_v3/up2x-latest-denoise3x.pth", 2), ("weights_v3/up3x-latest-denoise3x.pth", 3), - ("weights_v3/up4x-latest-denoise3x.pth", 4)]: - for tile_mode in [0, 1, 2, 3, 4]: - upscaler2x = RealWaifuUpScaler(scale, weight_path, half=True, device="cuda:0") - input_dir = "%s/input_dir1" % root_path - output_dir = "%s/opt-dir-all-test" % root_path - os.makedirs(output_dir, exist_ok=True) - for name in os.listdir(input_dir): - print(name) - tmp = name.split(".") - inp_path = os.path.join(input_dir, name) - suffix = tmp[-1] - prefix = ".".join(tmp[:-1]) - tmp_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix)) - print(inp_path, tmp_path) - # supports non-ASCII (e.g. Chinese) paths - # os.link(inp_path, tmp_path) # use a hard link on Windows - os.symlink(inp_path, tmp_path) # use a symlink on Linux - frame = cv2.imread(tmp_path)[:, :, [2, 1, 0]] - t0 = ttime() - result = upscaler2x(frame, tile_mode=tile_mode)[:, :, ::-1] - t1 = ttime() - print(prefix, "done", t1 - t0) - tmp_opt_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix)) - cv2.imwrite(tmp_opt_path, result) - n = 0 - while (1): - if (n == 0): - suffix = "_%sx_tile%s.png" % (scale, 
tile_mode) - else: - suffix = "_%sx_tile%s_%s.png" % (scale, tile_mode, n) # - if (os.path.exists(os.path.join(output_dir, prefix + suffix)) == False): - break - else: - n += 1 - final_opt_path = os.path.join(output_dir, prefix + suffix) - os.rename(tmp_opt_path, final_opt_path) - os.remove(tmp_path) diff --git a/spaces/kadirnar/yolox/configs/yolox_nano.py b/spaces/kadirnar/yolox/configs/yolox_nano.py deleted file mode 100644 index 8955dd2a7748c900cab7dca11adf877cd2cf5abd..0000000000000000000000000000000000000000 --- a/spaces/kadirnar/yolox/configs/yolox_nano.py +++ /dev/null @@ -1,48 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) Megvii, Inc. and its affiliates. - -import os - -import torch.nn as nn - -from yolox.exp import Exp as MyExp - - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.depth = 0.33 - self.width = 0.25 - self.input_size = (416, 416) - self.random_size = (10, 20) - self.mosaic_scale = (0.5, 1.5) - self.test_size = (416, 416) - self.mosaic_prob = 0.5 - self.enable_mixup = False - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] - - def get_model(self, sublinear=False): - - def init_yolo(M): - for m in M.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eps = 1e-3 - m.momentum = 0.03 - if "model" not in self.__dict__: - from yolox.models import YOLOX, YOLOPAFPN, YOLOXHead - in_channels = [256, 512, 1024] - # The NANO model uses depthwise=True, which is the main difference. 
- backbone = YOLOPAFPN( - self.depth, self.width, in_channels=in_channels, - act=self.act, depthwise=True, - ) - head = YOLOXHead( - self.num_classes, self.width, in_channels=in_channels, - act=self.act, depthwise=True - ) - self.model = YOLOX(backbone, head) - - self.model.apply(init_yolo) - self.model.head.initialize_biases(1e-2) - return self.model diff --git a/spaces/kahnchana/clippy/app.py b/spaces/kahnchana/clippy/app.py deleted file mode 100644 index fb06b1d6d0e5eb44f978b532836e9c2a7c67cf55..0000000000000000000000000000000000000000 --- a/spaces/kahnchana/clippy/app.py +++ /dev/null @@ -1,67 +0,0 @@ -import gradio as gr -import numpy as np -import torch -from PIL import Image - -from infer_model import CLIPpyModel -from utils import get_similarity, get_transform, ade_palette, get_cmap_image - -pretrained_ckpt = "https://github.com/kahnchana/clippy/releases/download/v1.0/clippy_5k.pt" -ckpt = torch.utils.model_zoo.load_url(pretrained_ckpt) - -clippy = CLIPpyModel() -transform = get_transform((224, 224)) - -msg = clippy.load_state_dict(ckpt, strict=False) - -palette = ade_palette() - - -def process_image(img, captions): - sample_text = [x.strip() for x in captions.split(",")] - sample_prompts = [f"a photo of a {x}" for x in sample_text] - - image = Image.fromarray(img) - image_vector = clippy.encode_image(transform(image).unsqueeze(0), get_pos_tokens=True) - text_vector = clippy.text.encode(sample_prompts, convert_to_tensor=True) - - similarity = get_similarity(image_vector, text_vector, (224, 224), do_argmax=True)[0, 0].numpy() - rgb_seg = np.zeros((similarity.shape[0], similarity.shape[1], 3), dtype=np.uint8) - for idx, _ in enumerate(sample_text): - rgb_seg[similarity == idx] = palette[idx] - - joint = Image.blend(image, Image.fromarray(rgb_seg), 0.5) - cmap = get_cmap_image({label: tuple(palette[idx]) for idx, label in enumerate(sample_text)}) - - return cmap, rgb_seg, joint - - -title = 'CLIPpy' - -description = """ -Gradio Demo for CLIPpy: Perceptual 
Grouping in Contrastive Vision Language Models. \n \n -Upload an image and type in a set of comma-separated labels (e.g.: "man, woman, background"). -CLIPpy will segment the image according to the set of class labels you provide. -""" - - -article = """ -

    - -Perceptual Grouping in Contrastive Vision Language Models - -| -Github Repository

    -""" - -demo = gr.Interface( - fn=process_image, - inputs=[gr.Image(shape=(224, 224)), "text"], - outputs=[gr.Image(shape=(224, 224)).style(height=150), - gr.Image(shape=(224, 224)).style(height=260), - gr.Image(shape=(224, 224)).style(height=260)], - title=title, - description=description, - article=article, -) - -demo.launch() diff --git a/spaces/keisuke-tada/gpt-playground/app.py b/spaces/keisuke-tada/gpt-playground/app.py deleted file mode 100644 index 3026f415b91ff58e0bbc784970f6ba700b01b89a..0000000000000000000000000000000000000000 --- a/spaces/keisuke-tada/gpt-playground/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import os -import openai -import streamlit as st - -openai.api_key = os.getenv("OPENAI_API_KEY") - -temperature = 0 - -prompt = st.text_area("Prompt") - -if prompt: - output_box = st.empty() - share_box = st.empty() - - content = [] - - for chunk in openai.ChatCompletion.create( - model="gpt-3.5-turbo", - temperature=temperature, - messages=[{"role": "user", "content": prompt}], - stream=True, - ): - chunk_content = chunk["choices"][0].get("delta", {}).get("content") - if chunk_content is not None: - content.append(chunk_content) - output = "".join(content).strip() - output_box.markdown(output) - - output = "".join(content).strip() - share_box.markdown( - f""" -```` -**Prompt:** - -``` -{prompt} -``` - -**Output:** - -``` -{output} -``` -```` - """ - ) \ No newline at end of file diff --git a/spaces/keithhon/logo-generator/README.md b/spaces/keithhon/logo-generator/README.md deleted file mode 100644 index be170abf341cd4fe36fc5af45c903fea6699db8d..0000000000000000000000000000000000000000 --- a/spaces/keithhon/logo-generator/README.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: Logo Generator -emoji: 🔥 -colorFrom: red -colorTo: yellow -sdk: streamlit -app_file: app.py -pinned: false ---- \ No newline at end of file diff --git a/spaces/kevinwang676/Bark-Voice-Cloning/bark/hubert/pre_kmeans_hubert.py 
b/spaces/kevinwang676/Bark-Voice-Cloning/bark/hubert/pre_kmeans_hubert.py deleted file mode 100644 index 5208bd2792dd32e7f761ae787927a70bdcb2e5d6..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bark-Voice-Cloning/bark/hubert/pre_kmeans_hubert.py +++ /dev/null @@ -1,107 +0,0 @@ -""" -Modified HuBERT model without kmeans. -Original author: https://github.com/lucidrains/ -Modified by: https://www.github.com/gitmylo/ -License: MIT -""" - -# Modified code from https://github.com/lucidrains/audiolm-pytorch/blob/main/audiolm_pytorch/hubert_kmeans.py - -from pathlib import Path - -import torch -from torch import nn -from einops import pack, unpack - -import fairseq - -from torchaudio.functional import resample - -from audiolm_pytorch.utils import curtail_to_multiple - -import logging -logging.root.setLevel(logging.ERROR) - - -def exists(val): - return val is not None - - -def default(val, d): - return val if exists(val) else d - - -class CustomHubert(nn.Module): - """ - checkpoint and kmeans can be downloaded at https://github.com/facebookresearch/fairseq/tree/main/examples/hubert - or you can train your own - """ - - def __init__( - self, - checkpoint_path, - target_sample_hz=16000, - seq_len_multiple_of=None, - output_layer=9, - device=None - ): - super().__init__() - self.target_sample_hz = target_sample_hz - self.seq_len_multiple_of = seq_len_multiple_of - self.output_layer = output_layer - - if device is not None: - self.to(device) - - model_path = Path(checkpoint_path) - - assert model_path.exists(), f'path {checkpoint_path} does not exist' - - print(f"Loading Hubert {checkpoint_path}") - checkpoint = torch.load(checkpoint_path) - load_model_input = {checkpoint_path: checkpoint} - model, *_ = fairseq.checkpoint_utils.load_model_ensemble_and_task(load_model_input) - - if device is not None: - model[0].to(device) - - self.model = model[0] - self.model.eval() - - @property - def groups(self): - return 1 - - @torch.no_grad() - def forward( - self, - 
wav_input, - flatten=True, - input_sample_hz=None - ): - device = wav_input.device - - if exists(input_sample_hz): - wav_input = resample(wav_input, input_sample_hz, self.target_sample_hz) - - if exists(self.seq_len_multiple_of): - wav_input = curtail_to_multiple(wav_input, self.seq_len_multiple_of) - - embed = self.model( - wav_input, - features_only=True, - mask=False, # thanks to @maitycyrus for noticing that mask is defaulted to True in the fairseq code - output_layer=self.output_layer - ) - - embed, packed_shape = pack([embed['x']], '* d') - - # codebook_indices = self.kmeans.predict(embed.cpu().detach().numpy()) - - codebook_indices = torch.from_numpy(embed.cpu().detach().numpy()).to(device) # .long() - - if flatten: - return codebook_indices - - codebook_indices, = unpack(codebook_indices, packed_shape, '*') - return codebook_indices diff --git a/spaces/kevinwang676/VALLE/macros.py b/spaces/kevinwang676/VALLE/macros.py deleted file mode 100644 index b192fccde1a11da26cff026c9a08c8ff54915907..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VALLE/macros.py +++ /dev/null @@ -1,39 +0,0 @@ -NUM_LAYERS = 12 -NUM_HEAD = 16 -N_DIM = 1024 -PREFIX_MODE = 1 -NUM_QUANTIZERS = 8 -SAMPLE_RATE = 24000 - -lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - 'mix': "", -} - -lang2code = { - 'zh': 0, - 'ja': 1, - "en": 2, -} - -token2lang = { - '[ZH]': "zh", - '[JA]': "ja", - "[EN]": "en", - "": "mix" -} - -code2lang = { - 0: 'zh', - 1: 'ja', - 2: "en", -} - -langdropdown2token = { - 'English': "[EN]", - '中文': "[ZH]", - '日本語': "[JA]", - 'Mix': "", -} \ No newline at end of file diff --git a/spaces/kevinwang676/VoiceChanger/src/utils/hparams.py b/spaces/kevinwang676/VoiceChanger/src/utils/hparams.py deleted file mode 100644 index 743c5c7d5a5a9e686f1ccd6fb3c2fb5cb382d62b..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChanger/src/utils/hparams.py +++ /dev/null @@ -1,160 +0,0 @@ -from glob import glob -import os - -class 
HParams: - def __init__(self, **kwargs): - self.data = {} - - for key, value in kwargs.items(): - self.data[key] = value - - def __getattr__(self, key): - if key not in self.data: - raise AttributeError("'HParams' object has no attribute %s" % key) - return self.data[key] - - def set_hparam(self, key, value): - self.data[key] = value - - -# Default hyperparameters -hparams = HParams( - num_mels=80, # Number of mel-spectrogram channels and local conditioning dimensionality - # network - rescale=True, # Whether to rescale audio prior to preprocessing - rescaling_max=0.9, # Rescaling value - - # Use LWS (https://github.com/Jonathan-LeRoux/lws) for STFT and phase reconstruction - # It"s preferred to set True to use with https://github.com/r9y9/wavenet_vocoder - # Does not work if n_ffit is not multiple of hop_size!! - use_lws=False, - - n_fft=800, # Extra window size is filled with 0 paddings to match this parameter - hop_size=200, # For 16000Hz, 200 = 12.5 ms (0.0125 * sample_rate) - win_size=800, # For 16000Hz, 800 = 50 ms (If None, win_size = n_fft) (0.05 * sample_rate) - sample_rate=16000, # 16000Hz (corresponding to librispeech) (sox --i ) - - frame_shift_ms=None, # Can replace hop_size parameter. (Recommended: 12.5) - - # Mel and Linear spectrograms normalization/scaling and clipping - signal_normalization=True, - # Whether to normalize mel spectrograms to some predefined range (following below parameters) - allow_clipping_in_normalization=True, # Only relevant if mel_normalization = True - symmetric_mels=True, - # Whether to scale the data to be symmetric around 0. (Also multiplies the output range by 2, - # faster and cleaner convergence) - max_abs_value=4., - # max absolute value of data. 
If symmetric, data will be [-max, max] else [0, max] (Must not - # be too big to avoid gradient explosion, - # not too small for fast convergence) - # Contribution by @begeekmyfriend - # Spectrogram Pre-Emphasis (Lfilter: Reduce spectrogram noise and helps model certitude - # levels. Also allows for better G&L phase reconstruction) - preemphasize=True, # whether to apply filter - preemphasis=0.97, # filter coefficient. - - # Limits - min_level_db=-100, - ref_level_db=20, - fmin=55, - # Set this to 55 if your speaker is male! if female, 95 should help taking off noise. (To - # test depending on dataset. Pitch info: male~[65, 260], female~[100, 525]) - fmax=7600, # To be increased/reduced depending on data. - - ###################### Our training parameters ################################# - img_size=96, - fps=25, - - batch_size=16, - initial_learning_rate=1e-4, - nepochs=300000, ### ctrl + c, stop whenever eval loss is consistently greater than train loss for ~10 epochs - num_workers=20, - checkpoint_interval=3000, - eval_interval=3000, - writer_interval=300, - save_optimizer_state=True, - - syncnet_wt=0.0, # is initially zero, will be set automatically to 0.03 later. Leads to faster convergence. - syncnet_batch_size=64, - syncnet_lr=1e-4, - syncnet_eval_interval=1000, - syncnet_checkpoint_interval=10000, - - disc_wt=0.07, - disc_initial_learning_rate=1e-4, -) - - - -# Default hyperparameters -hparamsdebug = HParams( - num_mels=80, # Number of mel-spectrogram channels and local conditioning dimensionality - # network - rescale=True, # Whether to rescale audio prior to preprocessing - rescaling_max=0.9, # Rescaling value - - # Use LWS (https://github.com/Jonathan-LeRoux/lws) for STFT and phase reconstruction - # It"s preferred to set True to use with https://github.com/r9y9/wavenet_vocoder - # Does not work if n_ffit is not multiple of hop_size!! 
- use_lws=False, - - n_fft=800, # Extra window size is filled with 0 paddings to match this parameter - hop_size=200, # For 16000Hz, 200 = 12.5 ms (0.0125 * sample_rate) - win_size=800, # For 16000Hz, 800 = 50 ms (If None, win_size = n_fft) (0.05 * sample_rate) - sample_rate=16000, # 16000Hz (corresponding to librispeech) (sox --i ) - - frame_shift_ms=None, # Can replace hop_size parameter. (Recommended: 12.5) - - # Mel and Linear spectrograms normalization/scaling and clipping - signal_normalization=True, - # Whether to normalize mel spectrograms to some predefined range (following below parameters) - allow_clipping_in_normalization=True, # Only relevant if mel_normalization = True - symmetric_mels=True, - # Whether to scale the data to be symmetric around 0. (Also multiplies the output range by 2, - # faster and cleaner convergence) - max_abs_value=4., - # max absolute value of data. If symmetric, data will be [-max, max] else [0, max] (Must not - # be too big to avoid gradient explosion, - # not too small for fast convergence) - # Contribution by @begeekmyfriend - # Spectrogram Pre-Emphasis (Lfilter: Reduce spectrogram noise and helps model certitude - # levels. Also allows for better G&L phase reconstruction) - preemphasize=True, # whether to apply filter - preemphasis=0.97, # filter coefficient. - - # Limits - min_level_db=-100, - ref_level_db=20, - fmin=55, - # Set this to 55 if your speaker is male! if female, 95 should help taking off noise. (To - # test depending on dataset. Pitch info: male~[65, 260], female~[100, 525]) - fmax=7600, # To be increased/reduced depending on data. 
- - ###################### Our training parameters ################################# - img_size=96, - fps=25, - - batch_size=2, - initial_learning_rate=1e-3, - nepochs=100000, ### ctrl + c, stop whenever eval loss is consistently greater than train loss for ~10 epochs - num_workers=0, - checkpoint_interval=10000, - eval_interval=10, - writer_interval=5, - save_optimizer_state=True, - - syncnet_wt=0.0, # is initially zero, will be set automatically to 0.03 later. Leads to faster convergence. - syncnet_batch_size=64, - syncnet_lr=1e-4, - syncnet_eval_interval=10000, - syncnet_checkpoint_interval=10000, - - disc_wt=0.07, - disc_initial_learning_rate=1e-4, -) - - -def hparams_debug_string(): - values = hparams.values() - hp = [" %s: %s" % (name, values[name]) for name in sorted(values) if name != "sentences"] - return "Hyperparameters:\n" + "\n".join(hp) diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/util/detect_lm68.py b/spaces/kevinwang676/VoiceChangers/src/face3d/util/detect_lm68.py deleted file mode 100644 index b7e40997289e17405e1fb6c408d21adce7b626ce..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/src/face3d/util/detect_lm68.py +++ /dev/null @@ -1,106 +0,0 @@ -import os -import cv2 -import numpy as np -from scipy.io import loadmat -import tensorflow as tf -from util.preprocess import align_for_lm -from shutil import move - -mean_face = np.loadtxt('util/test_mean_face.txt') -mean_face = mean_face.reshape([68, 2]) - -def save_label(labels, save_path): - np.savetxt(save_path, labels) - -def draw_landmarks(img, landmark, save_name): - landmark = landmark - lm_img = np.zeros([img.shape[0], img.shape[1], 3]) - lm_img[:] = img.astype(np.float32) - landmark = np.round(landmark).astype(np.int32) - - for i in range(len(landmark)): - for j in range(-1, 1): - for k in range(-1, 1): - if img.shape[0] - 1 - landmark[i, 1]+j > 0 and \ - img.shape[0] - 1 - landmark[i, 1]+j < img.shape[0] and \ - landmark[i, 0]+k > 0 and \ - 
landmark[i, 0]+k < img.shape[1]: - lm_img[img.shape[0] - 1 - landmark[i, 1]+j, landmark[i, 0]+k, - :] = np.array([0, 0, 255]) - lm_img = lm_img.astype(np.uint8) - - cv2.imwrite(save_name, lm_img) - - -def load_data(img_name, txt_name): - return cv2.imread(img_name), np.loadtxt(txt_name) - -# create tensorflow graph for landmark detector -def load_lm_graph(graph_filename): - with tf.gfile.GFile(graph_filename, 'rb') as f: - graph_def = tf.GraphDef() - graph_def.ParseFromString(f.read()) - - with tf.Graph().as_default() as graph: - tf.import_graph_def(graph_def, name='net') - img_224 = graph.get_tensor_by_name('net/input_imgs:0') - output_lm = graph.get_tensor_by_name('net/lm:0') - lm_sess = tf.Session(graph=graph) - - return lm_sess,img_224,output_lm - -# landmark detection -def detect_68p(img_path,sess,input_op,output_op): - print('detecting landmarks......') - names = [i for i in sorted(os.listdir( - img_path)) if 'jpg' in i or 'png' in i or 'jpeg' in i or 'PNG' in i] - vis_path = os.path.join(img_path, 'vis') - remove_path = os.path.join(img_path, 'remove') - save_path = os.path.join(img_path, 'landmarks') - if not os.path.isdir(vis_path): - os.makedirs(vis_path) - if not os.path.isdir(remove_path): - os.makedirs(remove_path) - if not os.path.isdir(save_path): - os.makedirs(save_path) - - for i in range(0, len(names)): - name = names[i] - print('%05d' % (i), ' ', name) - full_image_name = os.path.join(img_path, name) - txt_name = '.'.join(name.split('.')[:-1]) + '.txt' - full_txt_name = os.path.join(img_path, 'detections', txt_name) # 5 facial landmark path for each image - - # if an image does not have detected 5 facial landmarks, remove it from the training list - if not os.path.isfile(full_txt_name): - move(full_image_name, os.path.join(remove_path, name)) - continue - - # load data - img, five_points = load_data(full_image_name, full_txt_name) - input_img, scale, bbox = align_for_lm(img, five_points) # align for 68 landmark detection - - # if the alignment 
fails, remove corresponding image from the training list - if scale == 0: - move(full_txt_name, os.path.join( - remove_path, txt_name)) - move(full_image_name, os.path.join(remove_path, name)) - continue - - # detect landmarks - input_img = np.reshape( - input_img, [1, 224, 224, 3]).astype(np.float32) - landmark = sess.run( - output_op, feed_dict={input_op: input_img}) - - # transform back to original image coordinate - landmark = landmark.reshape([68, 2]) + mean_face - landmark[:, 1] = 223 - landmark[:, 1] - landmark = landmark / scale - landmark[:, 0] = landmark[:, 0] + bbox[0] - landmark[:, 1] = landmark[:, 1] + bbox[1] - landmark[:, 1] = img.shape[0] - 1 - landmark[:, 1] - - if i % 100 == 0: - draw_landmarks(img, landmark, os.path.join(vis_path, name)) - save_label(landmark, os.path.join(save_path, txt_name)) diff --git a/spaces/kingabzpro/savtadepth/run_dev_env.sh b/spaces/kingabzpro/savtadepth/run_dev_env.sh deleted file mode 100644 index 62944a2d5ebbcb8a7d56331d38f44de47369c4bc..0000000000000000000000000000000000000000 --- a/spaces/kingabzpro/savtadepth/run_dev_env.sh +++ /dev/null @@ -1,7 +0,0 @@ -docker run -d \ - -p 8080:8080 \ - --name "dags-ml-workspace" -v "/${PWD}:/workspace" \ - --env AUTHENTICATE_VIA_JUPYTER="dagshub_savta" \ - --shm-size 2G \ - --restart always \ - dagshub/ml-workspace-minimal:latest diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/latent_depth/latent_depth_src/models/__init__.py b/spaces/koajoel/PolyFormer/fairseq/examples/latent_depth/latent_depth_src/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/latent_depth/latent_depth_src/models/latent_transformer.py b/spaces/koajoel/PolyFormer/fairseq/examples/latent_depth/latent_depth_src/models/latent_transformer.py deleted file mode 100644 index 6a825301a452bd935deafdaf78fa2427ca9a469e..0000000000000000000000000000000000000000 --- 
a/spaces/koajoel/PolyFormer/fairseq/examples/latent_depth/latent_depth_src/models/latent_transformer.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Any, Dict, Optional - -import torch.nn as nn -from fairseq.models.fairseq_encoder import EncoderOut -from fairseq.models.transformer import TransformerDecoder, TransformerEncoder -from fairseq.modules import TransformerDecoderLayer, TransformerEncoderLayer -from torch import Tensor - -from ..modules.latent_layers import LayerSelect - - -class LatentTransformerEncoder(TransformerEncoder): - """Latent depth (https://arxiv.org/abs/2009.13102) implemented in - TransformerEncoder. - """ - - def __init__(self, args, dictionary, embed_tokens, num_logits=1): - self.num_logits = num_logits - self.num_layers = args.encoder_layers - super().__init__(args, dictionary, embed_tokens) - self.layer_select = LayerSelect( - num_layers=self.num_layers, - num_logits=self.num_logits, - soft_select=getattr(args, "soft_select", False), - sampling_tau=getattr(args, "sampling_tau", 5.), - ) - self.lang_idx = None - self.layers = nn.ModuleList( - [self._build_encoder_layer(args, idx) for idx in range(args.encoder_layers)] - ) - - def set_lang_idx(self, lang_idx): - self.lang_idx = lang_idx - - def _build_encoder_layer(self, args, idx=None): - return LatentTransformerEncoderLayer(args, idx, layer_select=self.layer_select) - - def forward(self, src_tokens, src_lengths, return_all_hiddens: bool = False): - self.layer_select.sample(self.lang_idx) - return super().forward(src_tokens, src_lengths, return_all_hiddens) - - -class LatentTransformerEncoderLayer(TransformerEncoderLayer): - """Encoder layer with each (non_residual) block weighted by Bernoulli - or Gumbel-Sigmoid samples. 
- - Args: - args (argparse.Namespace): parsed command-line arguments from standard - TransformerEncoderLayer. - idx (int): layer index (used to retrieve samples). - layer_select (LayerSelect, optional): instance of LayerSelect module with logits - parameters and sampling method. - """ - - def __init__(self, args, idx, layer_select=None): - super().__init__(args) - self.idx = idx - self.layer_select = layer_select - - def residual_connection(self, x, residual): - return residual + x * self.layer_select(self.idx) - - -class LatentTransformerDecoder(TransformerDecoder): - """Latent depth (https://arxiv.org/abs/2009.13102) implemented in - TransformerDecoder. - """ - - def __init__( - self, args, dictionary, embed_tokens, no_encoder_attn=False, num_logits=1 - ): - self.num_logits = num_logits - self.num_layers = args.decoder_layers - super().__init__( - args, dictionary, embed_tokens, no_encoder_attn=no_encoder_attn - ) - self.layer_select = LayerSelect( - num_layers=self.num_layers, - num_logits=self.num_logits, - soft_select=getattr(args, "soft_select", False), - sampling_tau=getattr(args, "sampling_tau", 5.), - ) - self.lang_idx = None - self.layers = nn.ModuleList( - [ - self._build_decoder_layer(args, no_encoder_attn, idx) - for idx in range(args.decoder_layers) - ] - ) - - def set_lang_idx(self, lang_idx): - self.lang_idx = lang_idx - - def _build_decoder_layer(self, args, no_encoder_attn=False, idx=None): - return LatentTransformerDecoderLayer( - args, idx, layer_select=self.layer_select, no_encoder_attn=no_encoder_attn - ) - - def forward( - self, - prev_output_tokens, - encoder_out: Optional[EncoderOut] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - features_only: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - src_lengths: Optional[Any] = None, - return_all_hiddens: bool = False, - ): - self.layer_select.sample(self.lang_idx) - return super().forward( - 
prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - features_only=features_only, - alignment_layer=alignment_layer, - src_lengths=src_lengths, - return_all_hiddens=return_all_hiddens, - ) - - -class LatentTransformerDecoderLayer(TransformerDecoderLayer): - """Decoder layer with each (non_residual) block weighted by Bernoulli - or Gumbel-Sigmoid samples. - - Args: - args (argparse.Namespace): parsed command-line arguments from standard - TransformerDecoderLayer. - idx (int): layer index (used to retrieve samples). - layer_select (LayerSelect, optional): instance of LayerSelect module with logits - parameters and sampling method. - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). - - """ - - def __init__( - self, - args, - idx, - layer_select=None, - no_encoder_attn=False, - add_bias_kv=False, - add_zero_attn=False, - ): - super().__init__(args, no_encoder_attn, add_bias_kv, add_zero_attn) - self.idx = idx - self.layer_select = layer_select - - def residual_connection(self, x, residual): - return residual + x * self.layer_select(self.idx) diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/op/upfirdn2d.py b/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/op/upfirdn2d.py deleted file mode 100644 index d509eb5e11e8cd01468dded5e5b53f5326057706..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/op/upfirdn2d.py +++ /dev/null @@ -1,61 +0,0 @@ -from collections import abc - -import torch -from torch.nn import functional as F - - -def upfirdn2d(inputs, kernel, up=1, down=1, pad=(0, 0)): - if not isinstance(up, abc.Iterable): - up = (up, up) - - if not isinstance(down, abc.Iterable): - down = (down, down) - - if len(pad) == 2: - pad = (pad[0], pad[1], pad[0], pad[1]) - - return upfirdn2d_native(inputs, kernel, *up, *down, *pad) - - -def upfirdn2d_native( - inputs, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, 
pad_y0, pad_y1 -): - _, channel, in_h, in_w = inputs.shape - inputs = inputs.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = inputs.shape - kernel_h, kernel_w = kernel.shape - - out = inputs.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x - - return out.view(-1, channel, out_h, out_w) \ No newline at end of file diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/ImageColor.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/ImageColor.py deleted file mode 100644 index e184ed68da37404397dfd45c4af08c2a8fb78ac0..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/ImageColor.py +++ /dev/null @@ -1,305 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# map CSS3-style colour description strings to RGB -# -# History: -# 2002-10-24 fl Added support for CSS-style color strings -# 2002-12-15 fl Added RGBA support -# 2004-03-27 fl Fixed remaining int() problems for Python 1.5.2 -# 2004-07-19 fl Fixed gray/grey spelling issues -# 2009-03-05 fl Fixed 
rounding error in grayscale calculation -# -# Copyright (c) 2002-2004 by Secret Labs AB -# Copyright (c) 2002-2004 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import re - -from . import Image - - -def getrgb(color): - """ - Convert a color string to an RGB or RGBA tuple. If the string cannot be - parsed, this function raises a :py:exc:`ValueError` exception. - - .. versionadded:: 1.1.4 - - :param color: A color string - :return: ``(red, green, blue[, alpha])`` - """ - if len(color) > 100: - msg = "color specifier is too long" - raise ValueError(msg) - color = color.lower() - - rgb = colormap.get(color, None) - if rgb: - if isinstance(rgb, tuple): - return rgb - colormap[color] = rgb = getrgb(rgb) - return rgb - - # check for known string formats - if re.match("#[a-f0-9]{3}$", color): - return int(color[1] * 2, 16), int(color[2] * 2, 16), int(color[3] * 2, 16) - - if re.match("#[a-f0-9]{4}$", color): - return ( - int(color[1] * 2, 16), - int(color[2] * 2, 16), - int(color[3] * 2, 16), - int(color[4] * 2, 16), - ) - - if re.match("#[a-f0-9]{6}$", color): - return int(color[1:3], 16), int(color[3:5], 16), int(color[5:7], 16) - - if re.match("#[a-f0-9]{8}$", color): - return ( - int(color[1:3], 16), - int(color[3:5], 16), - int(color[5:7], 16), - int(color[7:9], 16), - ) - - m = re.match(r"rgb\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)$", color) - if m: - return int(m.group(1)), int(m.group(2)), int(m.group(3)) - - m = re.match(r"rgb\(\s*(\d+)%\s*,\s*(\d+)%\s*,\s*(\d+)%\s*\)$", color) - if m: - return ( - int((int(m.group(1)) * 255) / 100.0 + 0.5), - int((int(m.group(2)) * 255) / 100.0 + 0.5), - int((int(m.group(3)) * 255) / 100.0 + 0.5), - ) - - m = re.match( - r"hsl\(\s*(\d+\.?\d*)\s*,\s*(\d+\.?\d*)%\s*,\s*(\d+\.?\d*)%\s*\)$", color - ) - if m: - from colorsys import hls_to_rgb - - rgb = hls_to_rgb( - float(m.group(1)) / 360.0, - float(m.group(3)) / 100.0, - float(m.group(2)) / 100.0, - ) - return ( - int(rgb[0] * 255 + 
0.5), - int(rgb[1] * 255 + 0.5), - int(rgb[2] * 255 + 0.5), - ) - - m = re.match( - r"hs[bv]\(\s*(\d+\.?\d*)\s*,\s*(\d+\.?\d*)%\s*,\s*(\d+\.?\d*)%\s*\)$", color - ) - if m: - from colorsys import hsv_to_rgb - - rgb = hsv_to_rgb( - float(m.group(1)) / 360.0, - float(m.group(2)) / 100.0, - float(m.group(3)) / 100.0, - ) - return ( - int(rgb[0] * 255 + 0.5), - int(rgb[1] * 255 + 0.5), - int(rgb[2] * 255 + 0.5), - ) - - m = re.match(r"rgba\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)$", color) - if m: - return int(m.group(1)), int(m.group(2)), int(m.group(3)), int(m.group(4)) - msg = f"unknown color specifier: {repr(color)}" - raise ValueError(msg) - - -def getcolor(color, mode): - """ - Same as :py:func:`~PIL.ImageColor.getrgb`, but converts the RGB value to a - greyscale value if ``mode`` is not color or a palette image. If the string - cannot be parsed, this function raises a :py:exc:`ValueError` exception. - - .. versionadded:: 1.1.4 - - :param color: A color string - :param mode: Convert result to this mode - :return: ``(graylevel[, alpha]) or (red, green, blue[, alpha])`` - """ - # same as getrgb, but converts the result to the given mode - color, alpha = getrgb(color), 255 - if len(color) == 4: - color, alpha = color[:3], color[3] - - if Image.getmodebase(mode) == "L": - r, g, b = color - # ITU-R Recommendation 601-2 for nonlinear RGB - # scaled to 24 bits to match the convert's implementation. - color = (r * 19595 + g * 38470 + b * 7471 + 0x8000) >> 16 - if mode[-1] == "A": - return color, alpha - else: - if mode[-1] == "A": - return color + (alpha,) - return color - - -colormap = { - # X11 colour table from https://drafts.csswg.org/css-color-4/, with - # gray/grey spelling issues fixed. This is a superset of HTML 4.0 - # colour names used in CSS 1. 
- "aliceblue": "#f0f8ff", - "antiquewhite": "#faebd7", - "aqua": "#00ffff", - "aquamarine": "#7fffd4", - "azure": "#f0ffff", - "beige": "#f5f5dc", - "bisque": "#ffe4c4", - "black": "#000000", - "blanchedalmond": "#ffebcd", - "blue": "#0000ff", - "blueviolet": "#8a2be2", - "brown": "#a52a2a", - "burlywood": "#deb887", - "cadetblue": "#5f9ea0", - "chartreuse": "#7fff00", - "chocolate": "#d2691e", - "coral": "#ff7f50", - "cornflowerblue": "#6495ed", - "cornsilk": "#fff8dc", - "crimson": "#dc143c", - "cyan": "#00ffff", - "darkblue": "#00008b", - "darkcyan": "#008b8b", - "darkgoldenrod": "#b8860b", - "darkgray": "#a9a9a9", - "darkgrey": "#a9a9a9", - "darkgreen": "#006400", - "darkkhaki": "#bdb76b", - "darkmagenta": "#8b008b", - "darkolivegreen": "#556b2f", - "darkorange": "#ff8c00", - "darkorchid": "#9932cc", - "darkred": "#8b0000", - "darksalmon": "#e9967a", - "darkseagreen": "#8fbc8f", - "darkslateblue": "#483d8b", - "darkslategray": "#2f4f4f", - "darkslategrey": "#2f4f4f", - "darkturquoise": "#00ced1", - "darkviolet": "#9400d3", - "deeppink": "#ff1493", - "deepskyblue": "#00bfff", - "dimgray": "#696969", - "dimgrey": "#696969", - "dodgerblue": "#1e90ff", - "firebrick": "#b22222", - "floralwhite": "#fffaf0", - "forestgreen": "#228b22", - "fuchsia": "#ff00ff", - "gainsboro": "#dcdcdc", - "ghostwhite": "#f8f8ff", - "gold": "#ffd700", - "goldenrod": "#daa520", - "gray": "#808080", - "grey": "#808080", - "green": "#008000", - "greenyellow": "#adff2f", - "honeydew": "#f0fff0", - "hotpink": "#ff69b4", - "indianred": "#cd5c5c", - "indigo": "#4b0082", - "ivory": "#fffff0", - "khaki": "#f0e68c", - "lavender": "#e6e6fa", - "lavenderblush": "#fff0f5", - "lawngreen": "#7cfc00", - "lemonchiffon": "#fffacd", - "lightblue": "#add8e6", - "lightcoral": "#f08080", - "lightcyan": "#e0ffff", - "lightgoldenrodyellow": "#fafad2", - "lightgreen": "#90ee90", - "lightgray": "#d3d3d3", - "lightgrey": "#d3d3d3", - "lightpink": "#ffb6c1", - "lightsalmon": "#ffa07a", - "lightseagreen": "#20b2aa", 
- "lightskyblue": "#87cefa", - "lightslategray": "#778899", - "lightslategrey": "#778899", - "lightsteelblue": "#b0c4de", - "lightyellow": "#ffffe0", - "lime": "#00ff00", - "limegreen": "#32cd32", - "linen": "#faf0e6", - "magenta": "#ff00ff", - "maroon": "#800000", - "mediumaquamarine": "#66cdaa", - "mediumblue": "#0000cd", - "mediumorchid": "#ba55d3", - "mediumpurple": "#9370db", - "mediumseagreen": "#3cb371", - "mediumslateblue": "#7b68ee", - "mediumspringgreen": "#00fa9a", - "mediumturquoise": "#48d1cc", - "mediumvioletred": "#c71585", - "midnightblue": "#191970", - "mintcream": "#f5fffa", - "mistyrose": "#ffe4e1", - "moccasin": "#ffe4b5", - "navajowhite": "#ffdead", - "navy": "#000080", - "oldlace": "#fdf5e6", - "olive": "#808000", - "olivedrab": "#6b8e23", - "orange": "#ffa500", - "orangered": "#ff4500", - "orchid": "#da70d6", - "palegoldenrod": "#eee8aa", - "palegreen": "#98fb98", - "paleturquoise": "#afeeee", - "palevioletred": "#db7093", - "papayawhip": "#ffefd5", - "peachpuff": "#ffdab9", - "peru": "#cd853f", - "pink": "#ffc0cb", - "plum": "#dda0dd", - "powderblue": "#b0e0e6", - "purple": "#800080", - "rebeccapurple": "#663399", - "red": "#ff0000", - "rosybrown": "#bc8f8f", - "royalblue": "#4169e1", - "saddlebrown": "#8b4513", - "salmon": "#fa8072", - "sandybrown": "#f4a460", - "seagreen": "#2e8b57", - "seashell": "#fff5ee", - "sienna": "#a0522d", - "silver": "#c0c0c0", - "skyblue": "#87ceeb", - "slateblue": "#6a5acd", - "slategray": "#708090", - "slategrey": "#708090", - "snow": "#fffafa", - "springgreen": "#00ff7f", - "steelblue": "#4682b4", - "tan": "#d2b48c", - "teal": "#008080", - "thistle": "#d8bfd8", - "tomato": "#ff6347", - "turquoise": "#40e0d0", - "violet": "#ee82ee", - "wheat": "#f5deb3", - "white": "#ffffff", - "whitesmoke": "#f5f5f5", - "yellow": "#ffff00", - "yellowgreen": "#9acd32", -} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/cookiejar.py 
b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/cookiejar.py deleted file mode 100644 index 6c88b47e3583430e05ea671af5b6da2a557073ec..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/cookiejar.py +++ /dev/null @@ -1,415 +0,0 @@ -import asyncio -import contextlib -import datetime -import os # noqa -import pathlib -import pickle -import re -from collections import defaultdict -from http.cookies import BaseCookie, Morsel, SimpleCookie -from typing import ( # noqa - DefaultDict, - Dict, - Iterable, - Iterator, - List, - Mapping, - Optional, - Set, - Tuple, - Union, - cast, -) - -from yarl import URL - -from .abc import AbstractCookieJar, ClearCookiePredicate -from .helpers import is_ip_address, next_whole_second -from .typedefs import LooseCookies, PathLike, StrOrURL - -__all__ = ("CookieJar", "DummyCookieJar") - - -CookieItem = Union[str, "Morsel[str]"] - - -class CookieJar(AbstractCookieJar): - """Implements cookie storage adhering to RFC 6265.""" - - DATE_TOKENS_RE = re.compile( - r"[\x09\x20-\x2F\x3B-\x40\x5B-\x60\x7B-\x7E]*" - r"(?P[\x00-\x08\x0A-\x1F\d:a-zA-Z\x7F-\xFF]+)" - ) - - DATE_HMS_TIME_RE = re.compile(r"(\d{1,2}):(\d{1,2}):(\d{1,2})") - - DATE_DAY_OF_MONTH_RE = re.compile(r"(\d{1,2})") - - DATE_MONTH_RE = re.compile( - "(jan)|(feb)|(mar)|(apr)|(may)|(jun)|(jul)|" "(aug)|(sep)|(oct)|(nov)|(dec)", - re.I, - ) - - DATE_YEAR_RE = re.compile(r"(\d{2,4})") - - MAX_TIME = datetime.datetime.max.replace(tzinfo=datetime.timezone.utc) - - MAX_32BIT_TIME = datetime.datetime.utcfromtimestamp(2**31 - 1) - - def __init__( - self, - *, - unsafe: bool = False, - quote_cookie: bool = True, - treat_as_secure_origin: Union[StrOrURL, List[StrOrURL], None] = None, - loop: Optional[asyncio.AbstractEventLoop] = None, - ) -> None: - super().__init__(loop=loop) - self._cookies: DefaultDict[Tuple[str, str], SimpleCookie[str]] = defaultdict( - SimpleCookie - ) - 
self._host_only_cookies: Set[Tuple[str, str]] = set() - self._unsafe = unsafe - self._quote_cookie = quote_cookie - if treat_as_secure_origin is None: - treat_as_secure_origin = [] - elif isinstance(treat_as_secure_origin, URL): - treat_as_secure_origin = [treat_as_secure_origin.origin()] - elif isinstance(treat_as_secure_origin, str): - treat_as_secure_origin = [URL(treat_as_secure_origin).origin()] - else: - treat_as_secure_origin = [ - URL(url).origin() if isinstance(url, str) else url.origin() - for url in treat_as_secure_origin - ] - self._treat_as_secure_origin = treat_as_secure_origin - self._next_expiration = next_whole_second() - self._expirations: Dict[Tuple[str, str, str], datetime.datetime] = {} - # #4515: datetime.max may not be representable on 32-bit platforms - self._max_time = self.MAX_TIME - try: - self._max_time.timestamp() - except OverflowError: - self._max_time = self.MAX_32BIT_TIME - - def save(self, file_path: PathLike) -> None: - file_path = pathlib.Path(file_path) - with file_path.open(mode="wb") as f: - pickle.dump(self._cookies, f, pickle.HIGHEST_PROTOCOL) - - def load(self, file_path: PathLike) -> None: - file_path = pathlib.Path(file_path) - with file_path.open(mode="rb") as f: - self._cookies = pickle.load(f) - - def clear(self, predicate: Optional[ClearCookiePredicate] = None) -> None: - if predicate is None: - self._next_expiration = next_whole_second() - self._cookies.clear() - self._host_only_cookies.clear() - self._expirations.clear() - return - - to_del = [] - now = datetime.datetime.now(datetime.timezone.utc) - for (domain, path), cookie in self._cookies.items(): - for name, morsel in cookie.items(): - key = (domain, path, name) - if ( - key in self._expirations and self._expirations[key] <= now - ) or predicate(morsel): - to_del.append(key) - - for domain, path, name in to_del: - self._host_only_cookies.discard((domain, name)) - key = (domain, path, name) - if key in self._expirations: - del self._expirations[(domain, path, 
name)] - self._cookies[(domain, path)].pop(name, None) - - next_expiration = min(self._expirations.values(), default=self._max_time) - try: - self._next_expiration = next_expiration.replace( - microsecond=0 - ) + datetime.timedelta(seconds=1) - except OverflowError: - self._next_expiration = self._max_time - - def clear_domain(self, domain: str) -> None: - self.clear(lambda x: self._is_domain_match(domain, x["domain"])) - - def __iter__(self) -> "Iterator[Morsel[str]]": - self._do_expiration() - for val in self._cookies.values(): - yield from val.values() - - def __len__(self) -> int: - return sum(1 for i in self) - - def _do_expiration(self) -> None: - self.clear(lambda x: False) - - def _expire_cookie( - self, when: datetime.datetime, domain: str, path: str, name: str - ) -> None: - self._next_expiration = min(self._next_expiration, when) - self._expirations[(domain, path, name)] = when - - def update_cookies(self, cookies: LooseCookies, response_url: URL = URL()) -> None: - """Update cookies.""" - hostname = response_url.raw_host - - if not self._unsafe and is_ip_address(hostname): - # Don't accept cookies from IPs - return - - if isinstance(cookies, Mapping): - cookies = cookies.items() - - for name, cookie in cookies: - if not isinstance(cookie, Morsel): - tmp: SimpleCookie[str] = SimpleCookie() - tmp[name] = cookie # type: ignore[assignment] - cookie = tmp[name] - - domain = cookie["domain"] - - # ignore domains with trailing dots - if domain.endswith("."): - domain = "" - del cookie["domain"] - - if not domain and hostname is not None: - # Set the cookie's domain to the response hostname - # and set its host-only-flag - self._host_only_cookies.add((hostname, name)) - domain = cookie["domain"] = hostname - - if domain.startswith("."): - # Remove leading dot - domain = domain[1:] - cookie["domain"] = domain - - if hostname and not self._is_domain_match(domain, hostname): - # Setting cookies for different domains is not allowed - continue - - path = 
cookie["path"] - if not path or not path.startswith("/"): - # Set the cookie's path to the response path - path = response_url.path - if not path.startswith("/"): - path = "/" - else: - # Cut everything from the last slash to the end - path = "/" + path[1 : path.rfind("/")] - cookie["path"] = path - - max_age = cookie["max-age"] - if max_age: - try: - delta_seconds = int(max_age) - try: - max_age_expiration = datetime.datetime.now( - datetime.timezone.utc - ) + datetime.timedelta(seconds=delta_seconds) - except OverflowError: - max_age_expiration = self._max_time - self._expire_cookie(max_age_expiration, domain, path, name) - except ValueError: - cookie["max-age"] = "" - - else: - expires = cookie["expires"] - if expires: - expire_time = self._parse_date(expires) - if expire_time: - self._expire_cookie(expire_time, domain, path, name) - else: - cookie["expires"] = "" - - self._cookies[(domain, path)][name] = cookie - - self._do_expiration() - - def filter_cookies( - self, request_url: URL = URL() - ) -> Union["BaseCookie[str]", "SimpleCookie[str]"]: - """Returns this jar's cookies filtered by their attributes.""" - self._do_expiration() - request_url = URL(request_url) - filtered: Union["SimpleCookie[str]", "BaseCookie[str]"] = ( - SimpleCookie() if self._quote_cookie else BaseCookie() - ) - hostname = request_url.raw_host or "" - request_origin = URL() - with contextlib.suppress(ValueError): - request_origin = request_url.origin() - - is_not_secure = ( - request_url.scheme not in ("https", "wss") - and request_origin not in self._treat_as_secure_origin - ) - - for cookie in self: - name = cookie.key - domain = cookie["domain"] - - # Send shared cookies - if not domain: - filtered[name] = cookie.value - continue - - if not self._unsafe and is_ip_address(hostname): - continue - - if (domain, name) in self._host_only_cookies: - if domain != hostname: - continue - elif not self._is_domain_match(domain, hostname): - continue - - if not 
self._is_path_match(request_url.path, cookie["path"]): - continue - - if is_not_secure and cookie["secure"]: - continue - - # It's critical we use the Morsel so the coded_value - # (based on cookie version) is preserved - mrsl_val = cast("Morsel[str]", cookie.get(cookie.key, Morsel())) - mrsl_val.set(cookie.key, cookie.value, cookie.coded_value) - filtered[name] = mrsl_val - - return filtered - - @staticmethod - def _is_domain_match(domain: str, hostname: str) -> bool: - """Implements domain matching adhering to RFC 6265.""" - if hostname == domain: - return True - - if not hostname.endswith(domain): - return False - - non_matching = hostname[: -len(domain)] - - if not non_matching.endswith("."): - return False - - return not is_ip_address(hostname) - - @staticmethod - def _is_path_match(req_path: str, cookie_path: str) -> bool: - """Implements path matching adhering to RFC 6265.""" - if not req_path.startswith("/"): - req_path = "/" - - if req_path == cookie_path: - return True - - if not req_path.startswith(cookie_path): - return False - - if cookie_path.endswith("/"): - return True - - non_matching = req_path[len(cookie_path) :] - - return non_matching.startswith("/") - - @classmethod - def _parse_date(cls, date_str: str) -> Optional[datetime.datetime]: - """Implements date string parsing adhering to RFC 6265.""" - if not date_str: - return None - - found_time = False - found_day = False - found_month = False - found_year = False - - hour = minute = second = 0 - day = 0 - month = 0 - year = 0 - - for token_match in cls.DATE_TOKENS_RE.finditer(date_str): - - token = token_match.group("token") - - if not found_time: - time_match = cls.DATE_HMS_TIME_RE.match(token) - if time_match: - found_time = True - hour, minute, second = (int(s) for s in time_match.groups()) - continue - - if not found_day: - day_match = cls.DATE_DAY_OF_MONTH_RE.match(token) - if day_match: - found_day = True - day = int(day_match.group()) - continue - - if not found_month: - month_match = 
cls.DATE_MONTH_RE.match(token) - if month_match: - found_month = True - assert month_match.lastindex is not None - month = month_match.lastindex - continue - - if not found_year: - year_match = cls.DATE_YEAR_RE.match(token) - if year_match: - found_year = True - year = int(year_match.group()) - - if 70 <= year <= 99: - year += 1900 - elif 0 <= year <= 69: - year += 2000 - - if False in (found_day, found_month, found_year, found_time): - return None - - if not 1 <= day <= 31: - return None - - if year < 1601 or hour > 23 or minute > 59 or second > 59: - return None - - return datetime.datetime( - year, month, day, hour, minute, second, tzinfo=datetime.timezone.utc - ) - - -class DummyCookieJar(AbstractCookieJar): - """Implements a dummy cookie storage. - - It can be used with the ClientSession when no cookie processing is needed. - - """ - - def __init__(self, *, loop: Optional[asyncio.AbstractEventLoop] = None) -> None: - super().__init__(loop=loop) - - def __iter__(self) -> "Iterator[Morsel[str]]": - while False: - yield None - - def __len__(self) -> int: - return 0 - - def clear(self, predicate: Optional[ClearCookiePredicate] = None) -> None: - pass - - def clear_domain(self, domain: str) -> None: - pass - - def update_cookies(self, cookies: LooseCookies, response_url: URL = URL()) -> None: - pass - - def filter_cookies(self, request_url: URL) -> "BaseCookie[str]": - return SimpleCookie() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/varLib/instancer/names.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/varLib/instancer/names.py deleted file mode 100644 index dad3fd7e57d86dff555818ee14e8239cf73435fe..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/varLib/instancer/names.py +++ /dev/null @@ -1,380 +0,0 @@ -"""Helpers for instantiating name table records.""" - -from contextlib import contextmanager -from 
copy import deepcopy -from enum import IntEnum -import re - - -class NameID(IntEnum): - FAMILY_NAME = 1 - SUBFAMILY_NAME = 2 - UNIQUE_FONT_IDENTIFIER = 3 - FULL_FONT_NAME = 4 - VERSION_STRING = 5 - POSTSCRIPT_NAME = 6 - TYPOGRAPHIC_FAMILY_NAME = 16 - TYPOGRAPHIC_SUBFAMILY_NAME = 17 - VARIATIONS_POSTSCRIPT_NAME_PREFIX = 25 - - -ELIDABLE_AXIS_VALUE_NAME = 2 - - -def getVariationNameIDs(varfont): - used = [] - if "fvar" in varfont: - fvar = varfont["fvar"] - for axis in fvar.axes: - used.append(axis.axisNameID) - for instance in fvar.instances: - used.append(instance.subfamilyNameID) - if instance.postscriptNameID != 0xFFFF: - used.append(instance.postscriptNameID) - if "STAT" in varfont: - stat = varfont["STAT"].table - for axis in stat.DesignAxisRecord.Axis if stat.DesignAxisRecord else (): - used.append(axis.AxisNameID) - for value in stat.AxisValueArray.AxisValue if stat.AxisValueArray else (): - used.append(value.ValueNameID) - elidedFallbackNameID = getattr(stat, "ElidedFallbackNameID", None) - if elidedFallbackNameID is not None: - used.append(elidedFallbackNameID) - # nameIDs <= 255 are reserved by OT spec so we don't touch them - return {nameID for nameID in used if nameID > 255} - - -@contextmanager -def pruningUnusedNames(varfont): - from . import log - - origNameIDs = getVariationNameIDs(varfont) - - yield - - log.info("Pruning name table") - exclude = origNameIDs - getVariationNameIDs(varfont) - varfont["name"].names[:] = [ - record for record in varfont["name"].names if record.nameID not in exclude - ] - if "ltag" in varfont: - # Drop the whole 'ltag' table if all the language-dependent Unicode name - # records that reference it have been dropped. - # TODO: Only prune unused ltag tags, renumerating langIDs accordingly. - # Note ltag can also be used by feat or morx tables, so check those too. 
- if not any( - record - for record in varfont["name"].names - if record.platformID == 0 and record.langID != 0xFFFF - ): - del varfont["ltag"] - - -def updateNameTable(varfont, axisLimits): - """Update instantiated variable font's name table using STAT AxisValues. - - Raises ValueError if the STAT table is missing or an Axis Value table is - missing for requested axis locations. - - First, collect all STAT AxisValues that match the new default axis locations - (excluding "elided" ones); concatenate the strings in design axis order, - while giving priority to "synthetic" values (Format 4), to form the - typographic subfamily name associated with the new default instance. - Finally, update all related records in the name table, making sure that - legacy family/sub-family names conform to the R/I/B/BI (Regular, Italic, - Bold, Bold Italic) naming model. - - Example: Updating a partial variable font: - | >>> ttFont = TTFont("OpenSans[wdth,wght].ttf") - | >>> updateNameTable(ttFont, {"wght": (400, 900), "wdth": 75}) - - The name table records will be updated in the following manner: - NameID 1 familyName: "Open Sans" --> "Open Sans Condensed" - NameID 2 subFamilyName: "Regular" --> "Regular" - NameID 3 Unique font identifier: "3.000;GOOG;OpenSans-Regular" --> \ - "3.000;GOOG;OpenSans-Condensed" - NameID 4 Full font name: "Open Sans Regular" --> "Open Sans Condensed" - NameID 6 PostScript name: "OpenSans-Regular" --> "OpenSans-Condensed" - NameID 16 Typographic Family name: None --> "Open Sans" - NameID 17 Typographic Subfamily name: None --> "Condensed" - - References: - https://docs.microsoft.com/en-us/typography/opentype/spec/stat - https://docs.microsoft.com/en-us/typography/opentype/spec/name#name-ids - """ - from . 
import AxisLimits, axisValuesFromAxisLimits - - if "STAT" not in varfont: - raise ValueError("Cannot update name table since there is no STAT table.") - stat = varfont["STAT"].table - if not stat.AxisValueArray: - raise ValueError("Cannot update name table since there are no STAT Axis Values") - fvar = varfont["fvar"] - - # The updated name table will reflect the new 'zero origin' of the font. - # If we're instantiating a partial font, we will populate the unpinned - # axes with their default axis values from fvar. - axisLimits = AxisLimits(axisLimits).limitAxesAndPopulateDefaults(varfont) - partialDefaults = axisLimits.defaultLocation() - fvarDefaults = {a.axisTag: a.defaultValue for a in fvar.axes} - defaultAxisCoords = AxisLimits({**fvarDefaults, **partialDefaults}) - assert all(v.minimum == v.maximum for v in defaultAxisCoords.values()) - - axisValueTables = axisValuesFromAxisLimits(stat, defaultAxisCoords) - checkAxisValuesExist(stat, axisValueTables, defaultAxisCoords.pinnedLocation()) - - # ignore "elidable" axis values, should be omitted in application font menus. 
- axisValueTables = [ - v for v in axisValueTables if not v.Flags & ELIDABLE_AXIS_VALUE_NAME - ] - axisValueTables = _sortAxisValues(axisValueTables) - _updateNameRecords(varfont, axisValueTables) - - -def checkAxisValuesExist(stat, axisValues, axisCoords): - seen = set() - designAxes = stat.DesignAxisRecord.Axis - for axisValueTable in axisValues: - axisValueFormat = axisValueTable.Format - if axisValueTable.Format in (1, 2, 3): - axisTag = designAxes[axisValueTable.AxisIndex].AxisTag - if axisValueFormat == 2: - axisValue = axisValueTable.NominalValue - else: - axisValue = axisValueTable.Value - if axisTag in axisCoords and axisValue == axisCoords[axisTag]: - seen.add(axisTag) - elif axisValueTable.Format == 4: - for rec in axisValueTable.AxisValueRecord: - axisTag = designAxes[rec.AxisIndex].AxisTag - if axisTag in axisCoords and rec.Value == axisCoords[axisTag]: - seen.add(axisTag) - - missingAxes = set(axisCoords) - seen - if missingAxes: - missing = ", ".join(f"'{i}': {axisCoords[i]}" for i in missingAxes) - raise ValueError(f"Cannot find Axis Values {{{missing}}}") - - -def _sortAxisValues(axisValues): - # Sort by axis index, remove duplicates and ensure that format 4 AxisValues - # are dominant. 
- # The MS Spec states: "if a format 1, format 2 or format 3 table has a - # (nominal) value used in a format 4 table that also has values for - # other axes, the format 4 table, being the more specific match, is used", - # https://docs.microsoft.com/en-us/typography/opentype/spec/stat#axis-value-table-format-4 - results = [] - seenAxes = set() - # Sort format 4 axes so the tables with the most AxisValueRecords are first - format4 = sorted( - [v for v in axisValues if v.Format == 4], - key=lambda v: len(v.AxisValueRecord), - reverse=True, - ) - - for val in format4: - axisIndexes = set(r.AxisIndex for r in val.AxisValueRecord) - minIndex = min(axisIndexes) - if not seenAxes & axisIndexes: - seenAxes |= axisIndexes - results.append((minIndex, val)) - - for val in axisValues: - if val in format4: - continue - axisIndex = val.AxisIndex - if axisIndex not in seenAxes: - seenAxes.add(axisIndex) - results.append((axisIndex, val)) - - return [axisValue for _, axisValue in sorted(results)] - - -def _updateNameRecords(varfont, axisValues): - # Update nametable based on the axisValues using the R/I/B/BI model. - nametable = varfont["name"] - stat = varfont["STAT"].table - - axisValueNameIDs = [a.ValueNameID for a in axisValues] - ribbiNameIDs = [n for n in axisValueNameIDs if _isRibbi(nametable, n)] - nonRibbiNameIDs = [n for n in axisValueNameIDs if n not in ribbiNameIDs] - elidedNameID = stat.ElidedFallbackNameID - elidedNameIsRibbi = _isRibbi(nametable, elidedNameID) - - getName = nametable.getName - platforms = set((r.platformID, r.platEncID, r.langID) for r in nametable.names) - for platform in platforms: - if not all(getName(i, *platform) for i in (1, 2, elidedNameID)): - # Since no family name and subfamily name records were found, - # we cannot update this set of name Records. 
- continue - - subFamilyName = " ".join( - getName(n, *platform).toUnicode() for n in ribbiNameIDs - ) - if nonRibbiNameIDs: - typoSubFamilyName = " ".join( - getName(n, *platform).toUnicode() for n in axisValueNameIDs - ) - else: - typoSubFamilyName = None - - # If neither subFamilyName and typographic SubFamilyName exist, - # we will use the STAT's elidedFallbackName - if not typoSubFamilyName and not subFamilyName: - if elidedNameIsRibbi: - subFamilyName = getName(elidedNameID, *platform).toUnicode() - else: - typoSubFamilyName = getName(elidedNameID, *platform).toUnicode() - - familyNameSuffix = " ".join( - getName(n, *platform).toUnicode() for n in nonRibbiNameIDs - ) - - _updateNameTableStyleRecords( - varfont, - familyNameSuffix, - subFamilyName, - typoSubFamilyName, - *platform, - ) - - -def _isRibbi(nametable, nameID): - englishRecord = nametable.getName(nameID, 3, 1, 0x409) - return ( - True - if englishRecord is not None - and englishRecord.toUnicode() in ("Regular", "Italic", "Bold", "Bold Italic") - else False - ) - - -def _updateNameTableStyleRecords( - varfont, - familyNameSuffix, - subFamilyName, - typoSubFamilyName, - platformID=3, - platEncID=1, - langID=0x409, -): - # TODO (Marc F) It may be nice to make this part a standalone - # font renamer in the future. 
- nametable = varfont["name"] - platform = (platformID, platEncID, langID) - - currentFamilyName = nametable.getName( - NameID.TYPOGRAPHIC_FAMILY_NAME, *platform - ) or nametable.getName(NameID.FAMILY_NAME, *platform) - - currentStyleName = nametable.getName( - NameID.TYPOGRAPHIC_SUBFAMILY_NAME, *platform - ) or nametable.getName(NameID.SUBFAMILY_NAME, *platform) - - if not all([currentFamilyName, currentStyleName]): - raise ValueError(f"Missing required NameIDs 1 and 2 for platform {platform}") - - currentFamilyName = currentFamilyName.toUnicode() - currentStyleName = currentStyleName.toUnicode() - - nameIDs = { - NameID.FAMILY_NAME: currentFamilyName, - NameID.SUBFAMILY_NAME: subFamilyName or "Regular", - } - if typoSubFamilyName: - nameIDs[NameID.FAMILY_NAME] = f"{currentFamilyName} {familyNameSuffix}".strip() - nameIDs[NameID.TYPOGRAPHIC_FAMILY_NAME] = currentFamilyName - nameIDs[NameID.TYPOGRAPHIC_SUBFAMILY_NAME] = typoSubFamilyName - else: - # Remove previous Typographic Family and SubFamily names since they're - # no longer required - for nameID in ( - NameID.TYPOGRAPHIC_FAMILY_NAME, - NameID.TYPOGRAPHIC_SUBFAMILY_NAME, - ): - nametable.removeNames(nameID=nameID) - - newFamilyName = ( - nameIDs.get(NameID.TYPOGRAPHIC_FAMILY_NAME) or nameIDs[NameID.FAMILY_NAME] - ) - newStyleName = ( - nameIDs.get(NameID.TYPOGRAPHIC_SUBFAMILY_NAME) or nameIDs[NameID.SUBFAMILY_NAME] - ) - - nameIDs[NameID.FULL_FONT_NAME] = f"{newFamilyName} {newStyleName}" - nameIDs[NameID.POSTSCRIPT_NAME] = _updatePSNameRecord( - varfont, newFamilyName, newStyleName, platform - ) - - uniqueID = _updateUniqueIdNameRecord(varfont, nameIDs, platform) - if uniqueID: - nameIDs[NameID.UNIQUE_FONT_IDENTIFIER] = uniqueID - - for nameID, string in nameIDs.items(): - assert string, nameID - nametable.setName(string, nameID, *platform) - - if "fvar" not in varfont: - nametable.removeNames(NameID.VARIATIONS_POSTSCRIPT_NAME_PREFIX) - - -def _updatePSNameRecord(varfont, familyName, styleName, platform): - 
# Implementation based on Adobe Technical Note #5902 : - # https://wwwimages2.adobe.com/content/dam/acom/en/devnet/font/pdfs/5902.AdobePSNameGeneration.pdf - nametable = varfont["name"] - - family_prefix = nametable.getName( - NameID.VARIATIONS_POSTSCRIPT_NAME_PREFIX, *platform - ) - if family_prefix: - family_prefix = family_prefix.toUnicode() - else: - family_prefix = familyName - - psName = f"{family_prefix}-{styleName}" - # Remove any characters other than uppercase Latin letters, lowercase - # Latin letters, digits and hyphens. - psName = re.sub(r"[^A-Za-z0-9-]", r"", psName) - - if len(psName) > 127: - # Abbreviating the stylename so it fits within 127 characters whilst - # conforming to every vendor's specification is too complex. Instead - # we simply truncate the psname and add the required "..." - return f"{psName[:124]}..." - return psName - - -def _updateUniqueIdNameRecord(varfont, nameIDs, platform): - nametable = varfont["name"] - currentRecord = nametable.getName(NameID.UNIQUE_FONT_IDENTIFIER, *platform) - if not currentRecord: - return None - - # Check if full name and postscript name are a substring of currentRecord - for nameID in (NameID.FULL_FONT_NAME, NameID.POSTSCRIPT_NAME): - nameRecord = nametable.getName(nameID, *platform) - if not nameRecord: - continue - if nameRecord.toUnicode() in currentRecord.toUnicode(): - return currentRecord.toUnicode().replace( - nameRecord.toUnicode(), nameIDs[nameRecord.nameID] - ) - - # Create a new string since we couldn't find any substrings. 
- fontVersion = _fontVersion(varfont, platform) - achVendID = varfont["OS/2"].achVendID - # Remove non-ASCII characters and trailing spaces - vendor = re.sub(r"[^\x00-\x7F]", "", achVendID).strip() - psName = nameIDs[NameID.POSTSCRIPT_NAME] - return f"{fontVersion};{vendor};{psName}" - - -def _fontVersion(font, platform=(3, 1, 0x409)): - nameRecord = font["name"].getName(NameID.VERSION_STRING, *platform) - if nameRecord is None: - return f'{font["head"].fontRevision:.3f}' - # "Version 1.101; ttfautohint (v1.8.1.43-b0c9)" --> "1.101" - # Also works fine with inputs "Version 1.101" or "1.101" etc - versionNumber = nameRecord.toUnicode().split(";")[0] - return versionNumber.lstrip("Version ").strip() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/__init__.py deleted file mode 100644 index e12b4ae2ed2bb6371c15409ce3f619251a8833d1..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the BSD-style license found in the -# LICENSE file in the root directory of this source tree. -import torch -from . import _C - -# Top-level APIs. Please think carefully before adding something to the -# top-level namespace: -# - private helper functions should go into torch._functorch -# - very experimental things should go into functorch.experimental -# - compilation related things should go into functorch.compile - -# Was never documented -from torch._functorch.python_key import make_fx - -from torch._functorch.deprecated import ( - vmap, grad, grad_and_value, vjp, jvp, jacrev, jacfwd, hessian, functionalize, - make_functional, make_functional_with_buffers, combine_state_for_ensemble, -) - -# utilities. 
Maybe these should go in their own namespace in the future? -from torch._functorch.make_functional import ( - FunctionalModule, - FunctionalModuleWithBuffers, -) - -__version__ = torch.__version__ diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/_commonjsHelpers-042e6b4d.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/_commonjsHelpers-042e6b4d.js deleted file mode 100644 index 98f950c798763b4a338d898c22552b9b729b56bb..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/_commonjsHelpers-042e6b4d.js +++ /dev/null @@ -1,2 +0,0 @@ -var f=typeof globalThis<"u"?globalThis:typeof window<"u"?window:typeof global<"u"?global:typeof self<"u"?self:{};function l(e){return e&&e.__esModule&&Object.prototype.hasOwnProperty.call(e,"default")?e.default:e}function a(e){if(e.__esModule)return e;var r=e.default;if(typeof r=="function"){var o=function n(){if(this instanceof n){var t=[null];t.push.apply(t,arguments);var u=Function.bind.apply(r,t);return new u}return r.apply(this,arguments)};o.prototype=r.prototype}else o={};return Object.defineProperty(o,"__esModule",{value:!0}),Object.keys(e).forEach(function(n){var t=Object.getOwnPropertyDescriptor(e,n);Object.defineProperty(o,n,t.get?t:{enumerable:!0,get:function(){return e[n]}})}),o}export{a,f as c,l as g}; -//# sourceMappingURL=_commonjsHelpers-042e6b4d.js.map diff --git a/spaces/leafShen/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/common.py b/spaces/leafShen/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/common.py deleted file mode 100644 index 497a00444c4c59725001993a63fe4617e9d323c8..0000000000000000000000000000000000000000 --- a/spaces/leafShen/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/common.py +++ /dev/null @@ -1,299 +0,0 @@ -# This file contains modules common to various models - 
-import math - -import numpy as np -import torch -from torch import nn - -from facelib.detection.yolov5face.utils.datasets import letterbox -from facelib.detection.yolov5face.utils.general import ( - make_divisible, - non_max_suppression, - scale_coords, - xyxy2xywh, -) - - -def autopad(k, p=None): # kernel, padding - # Pad to 'same' - if p is None: - p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad - return p - - -def channel_shuffle(x, groups): - batchsize, num_channels, height, width = x.data.size() - channels_per_group = torch.div(num_channels, groups, rounding_mode="trunc") - - # reshape - x = x.view(batchsize, groups, channels_per_group, height, width) - x = torch.transpose(x, 1, 2).contiguous() - - # flatten - return x.view(batchsize, -1, height, width) - - -def DWConv(c1, c2, k=1, s=1, act=True): - # Depthwise convolution - return Conv(c1, c2, k, s, g=math.gcd(c1, c2), act=act) - - -class Conv(nn.Module): - # Standard convolution - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) - - def forward(self, x): - return self.act(self.bn(self.conv(x))) - - def fuseforward(self, x): - return self.act(self.conv(x)) - - -class StemBlock(nn.Module): - def __init__(self, c1, c2, k=3, s=2, p=None, g=1, act=True): - super().__init__() - self.stem_1 = Conv(c1, c2, k, s, p, g, act) - self.stem_2a = Conv(c2, c2 // 2, 1, 1, 0) - self.stem_2b = Conv(c2 // 2, c2, 3, 2, 1) - self.stem_2p = nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True) - self.stem_3 = Conv(c2 * 2, c2, 1, 1, 0) - - def forward(self, x): - stem_1_out = self.stem_1(x) - stem_2a_out = self.stem_2a(stem_1_out) - stem_2b_out = self.stem_2b(stem_2a_out) - stem_2p_out = self.stem_2p(stem_1_out) - return 
self.stem_3(torch.cat((stem_2b_out, stem_2p_out), 1)) - - -class Bottleneck(nn.Module): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2, 3, 1, g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class BottleneckCSP(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False) - self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False) - self.cv4 = Conv(2 * c_, c2, 1, 1) - self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3) - self.act = nn.LeakyReLU(0.1, inplace=True) - self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1)))) - - -class C3(nn.Module): - # CSP Bottleneck with 3 convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1) # act=FReLU(c2) - self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))) - - def forward(self, x): - return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1)) - - -class ShuffleV2Block(nn.Module): - def __init__(self, inp, oup, stride): - super().__init__() - - if not 1 <= stride <= 3: - raise ValueError("illegal stride value") - self.stride = stride - - 
branch_features = oup // 2 - - if self.stride > 1: - self.branch1 = nn.Sequential( - self.depthwise_conv(inp, inp, kernel_size=3, stride=self.stride, padding=1), - nn.BatchNorm2d(inp), - nn.Conv2d(inp, branch_features, kernel_size=1, stride=1, padding=0, bias=False), - nn.BatchNorm2d(branch_features), - nn.SiLU(), - ) - else: - self.branch1 = nn.Sequential() - - self.branch2 = nn.Sequential( - nn.Conv2d( - inp if (self.stride > 1) else branch_features, - branch_features, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ), - nn.BatchNorm2d(branch_features), - nn.SiLU(), - self.depthwise_conv(branch_features, branch_features, kernel_size=3, stride=self.stride, padding=1), - nn.BatchNorm2d(branch_features), - nn.Conv2d(branch_features, branch_features, kernel_size=1, stride=1, padding=0, bias=False), - nn.BatchNorm2d(branch_features), - nn.SiLU(), - ) - - @staticmethod - def depthwise_conv(i, o, kernel_size, stride=1, padding=0, bias=False): - return nn.Conv2d(i, o, kernel_size, stride, padding, bias=bias, groups=i) - - def forward(self, x): - if self.stride == 1: - x1, x2 = x.chunk(2, dim=1) - out = torch.cat((x1, self.branch2(x2)), dim=1) - else: - out = torch.cat((self.branch1(x), self.branch2(x)), dim=1) - out = channel_shuffle(out, 2) - return out - - -class SPP(nn.Module): - # Spatial pyramid pooling layer used in YOLOv3-SPP - def __init__(self, c1, c2, k=(5, 9, 13)): - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - - def forward(self, x): - x = self.cv1(x) - return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1)) - - -class Focus(nn.Module): - # Focus wh information into c-space - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.conv = Conv(c1 * 4, c2, k, s, p, g, act) - - def 
forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)) - - -class Concat(nn.Module): - # Concatenate a list of tensors along dimension - def __init__(self, dimension=1): - super().__init__() - self.d = dimension - - def forward(self, x): - return torch.cat(x, self.d) - - -class NMS(nn.Module): - # Non-Maximum Suppression (NMS) module - conf = 0.25 # confidence threshold - iou = 0.45 # IoU threshold - classes = None # (optional list) filter by class - - def forward(self, x): - return non_max_suppression(x[0], conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) - - -class AutoShape(nn.Module): - # input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS - img_size = 640 # inference size (pixels) - conf = 0.25 # NMS confidence threshold - iou = 0.45 # NMS IoU threshold - classes = None # (optional list) filter by class - - def __init__(self, model): - super().__init__() - self.model = model.eval() - - def autoshape(self): - print("autoShape already enabled, skipping... ") # model already converted to model.autoshape() - return self - - def forward(self, imgs, size=640, augment=False, profile=False): - # Inference from various sources. For height=720, width=1280, RGB images example inputs are: - # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(720,1280,3) - # PIL: = Image.open('image.jpg') # HWC x(720,1280,3) - # numpy: = np.zeros((720,1280,3)) # HWC - # torch: = torch.zeros(16,3,720,1280) # BCHW - # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] 
# list of images - - p = next(self.model.parameters()) # for device and type - if isinstance(imgs, torch.Tensor): # torch - return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference - - # Pre-process - n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs]) # number of images, list of images - shape0, shape1 = [], [] # image and inference shapes - for i, im in enumerate(imgs): - im = np.array(im) # to numpy - if im.shape[0] < 5: # image in CHW - im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1) - im = im[:, :, :3] if im.ndim == 3 else np.tile(im[:, :, None], 3) # enforce 3ch input - s = im.shape[:2] # HWC - shape0.append(s) # image shape - g = size / max(s) # gain - shape1.append([y * g for y in s]) - imgs[i] = im # update - shape1 = [make_divisible(x, int(self.stride.max())) for x in np.stack(shape1, 0).max(0)] # inference shape - x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs] # pad - x = np.stack(x, 0) if n > 1 else x[0][None] # stack - x = np.ascontiguousarray(x.transpose((0, 3, 1, 2))) # BHWC to BCHW - x = torch.from_numpy(x).to(p.device).type_as(p) / 255.0 # uint8 to fp16/32 - - # Inference - with torch.no_grad(): - y = self.model(x, augment, profile)[0] # forward - y = non_max_suppression(y, conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) # NMS - - # Post-process - for i in range(n): - scale_coords(shape1, y[i][:, :4], shape0[i]) - - return Detections(imgs, y, self.names) - - -class Detections: - # detections class for YOLOv5 inference results - def __init__(self, imgs, pred, names=None): - super().__init__() - d = pred[0].device # device - gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1.0, 1.0], device=d) for im in imgs] # normalizations - self.imgs = imgs # list of images as numpy arrays - self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls) - self.names = names # class names - self.xyxy = pred # xyxy pixels - self.xywh = [xyxy2xywh(x) for x in 
pred] # xywh pixels - self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized - self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized - self.n = len(self.pred) - - def __len__(self): - return self.n - - def tolist(self): - # return a list of Detections objects, i.e. 'for result in results.tolist():' - x = [Detections([self.imgs[i]], [self.pred[i]], self.names) for i in range(self.n)] - for d in x: - for k in ["imgs", "pred", "xyxy", "xyxyn", "xywh", "xywhn"]: - setattr(d, k, getattr(d, k)[0]) # pop out of list - return x diff --git a/spaces/leogabraneth/text-generation-webui-main/modules/llamacpp_model.py b/spaces/leogabraneth/text-generation-webui-main/modules/llamacpp_model.py deleted file mode 100644 index 25d171b14b03e148c5612dd5043ec89af3da18e1..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/modules/llamacpp_model.py +++ /dev/null @@ -1,176 +0,0 @@ -import re -from functools import partial - -import numpy as np -import torch - -from modules import RoPE, shared -from modules.callbacks import Iteratorize -from modules.logging_colors import logger -from modules.text_generation import get_max_prompt_length - -try: - import llama_cpp -except: - llama_cpp = None - -try: - import llama_cpp_cuda -except: - llama_cpp_cuda = None - - -def llama_cpp_lib(): - if (shared.args.cpu and llama_cpp is not None) or llama_cpp_cuda is None: - return llama_cpp - else: - return llama_cpp_cuda - - -def ban_eos_logits_processor(eos_token, input_ids, logits): - logits[eos_token] = -float('inf') - return logits - - -def custom_token_ban_logits_processor(token_ids, input_ids, logits): - for token_id in token_ids: - logits[token_id] = -float('inf') - - return logits - - -class LlamaCppModel: - def __init__(self): - self.initialized = False - self.grammar_string = '' - self.grammar = None - - def __del__(self): - self.model.__del__() - - @classmethod - def from_pretrained(self, path): - - Llama = 
llama_cpp_lib().Llama - LlamaCache = llama_cpp_lib().LlamaCache - - result = self() - cache_capacity = 0 - if shared.args.cache_capacity is not None: - if 'GiB' in shared.args.cache_capacity: - cache_capacity = int(re.sub('[a-zA-Z]', '', shared.args.cache_capacity)) * 1000 * 1000 * 1000 - elif 'MiB' in shared.args.cache_capacity: - cache_capacity = int(re.sub('[a-zA-Z]', '', shared.args.cache_capacity)) * 1000 * 1000 - else: - cache_capacity = int(shared.args.cache_capacity) - - logger.info("Cache capacity is " + str(cache_capacity) + " bytes") - - if shared.args.tensor_split is None or shared.args.tensor_split.strip() == '': - tensor_split_list = None - else: - tensor_split_list = [float(x) for x in shared.args.tensor_split.strip().split(",")] - - params = { - 'model_path': str(path), - 'n_ctx': shared.args.n_ctx, - 'seed': int(shared.args.llama_cpp_seed), - 'n_threads': shared.args.threads or None, - 'n_threads_batch': shared.args.threads_batch or None, - 'n_batch': shared.args.n_batch, - 'use_mmap': not shared.args.no_mmap, - 'use_mlock': shared.args.mlock, - 'mul_mat_q': not shared.args.no_mul_mat_q, - 'numa': shared.args.numa, - 'n_gpu_layers': shared.args.n_gpu_layers, - 'rope_freq_base': RoPE.get_rope_freq_base(shared.args.alpha_value, shared.args.rope_freq_base), - 'tensor_split': tensor_split_list, - 'rope_freq_scale': 1.0 / shared.args.compress_pos_emb, - } - - result.model = Llama(**params) - if cache_capacity > 0: - result.model.set_cache(LlamaCache(capacity_bytes=cache_capacity)) - - # This is ugly, but the model and the tokenizer are the same object in this library. 
- return result, result - - def encode(self, string): - if type(string) is str: - string = string.encode() - - return self.model.tokenize(string) - - def decode(self, ids): - return self.model.detokenize(ids).decode('utf-8') - - def get_logits(self, tokens): - self.model.eval(tokens) - logits = self.model._scores - logits = np.expand_dims(logits, 0) # batch dim is expected - return torch.tensor(logits, dtype=torch.float32) - - def load_grammar(self, string): - if string != self.grammar_string: - self.grammar_string = string - if string.strip() != '': - self.grammar = llama_cpp_lib().LlamaGrammar.from_string(string) - else: - self.grammar = None - - def generate(self, prompt, state, callback=None): - - LogitsProcessorList = llama_cpp_lib().LogitsProcessorList - - prompt = prompt if type(prompt) is str else prompt.decode() - - # Handle truncation - prompt = self.encode(prompt) - prompt = prompt[-get_max_prompt_length(state):] - prompt = self.decode(prompt) - - self.load_grammar(state['grammar_string']) - logit_processors = LogitsProcessorList() - if state['ban_eos_token']: - logit_processors.append(partial(ban_eos_logits_processor, self.model.token_eos())) - - if state['custom_token_bans']: - to_ban = [int(x) for x in state['custom_token_bans'].split(',')] - if len(to_ban) > 0: - logit_processors.append(partial(custom_token_ban_logits_processor, to_ban)) - - completion_chunks = self.model.create_completion( - prompt=prompt, - max_tokens=state['max_new_tokens'], - temperature=state['temperature'], - top_p=state['top_p'], - top_k=state['top_k'], - repeat_penalty=state['repetition_penalty'], - presence_penalty=state['presence_penalty'], - frequency_penalty=state['frequency_penalty'], - tfs_z=state['tfs'], - mirostat_mode=int(state['mirostat_mode']), - mirostat_tau=state['mirostat_tau'], - mirostat_eta=state['mirostat_eta'], - stream=True, - logits_processor=logit_processors, - grammar=self.grammar - ) - - output = "" - for completion_chunk in completion_chunks: - if 
shared.stop_everything: - break - text = completion_chunk['choices'][0]['text'] - output += text - if callback: - callback(text) - - return output - - def generate_with_streaming(self, *args, **kwargs): - with Iteratorize(self.generate, args, kwargs, callback=None) as generator: - reply = '' - for token in generator: - reply += token - yield reply diff --git a/spaces/leogabraneth/text-generation-webui-main/repositories/exllama/entrypoint.sh b/spaces/leogabraneth/text-generation-webui-main/repositories/exllama/entrypoint.sh deleted file mode 100644 index 74d077f7addea41ad1695b98612f15abe5d73939..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/repositories/exllama/entrypoint.sh +++ /dev/null @@ -1,19 +0,0 @@ -#!/usr/bin/env bash -set -Eeuo pipefail - -# Ensure that the application state path is set -if [ -z $APPLICATION_STATE_PATH ]; then - echo "Must specify application state path" - exit 1 -fi - -# Ensure that bind-mounted directories are owned by the user that runs the service if the user is not root -if [ $RUN_UID -ne 0 ]; then - chown -R $RUN_UID:$RUN_UID $APPLICATION_STATE_PATH -fi - -# Run service as specified (non-root) user -exec runuser -u $(id -un $RUN_UID) -- python3 /app/webui/app.py \ - -d $CONTAINER_MODEL_PATH \ - --sessions_dir $CONTAINER_SESSIONS_PATH \ - $@ diff --git a/spaces/lightli/bingo-newbing/src/lib/utils.ts b/spaces/lightli/bingo-newbing/src/lib/utils.ts deleted file mode 100644 index 07feedb34e356b1b3cf867872f32d47a96ae12fb..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/src/lib/utils.ts +++ /dev/null @@ -1,138 +0,0 @@ -import { clsx, type ClassValue } from 'clsx' -import { customAlphabet } from 'nanoid' -import { twMerge } from 'tailwind-merge' - -export function cn(...inputs: ClassValue[]) { - return twMerge(clsx(inputs)) -} - -export const nanoid = customAlphabet( - '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', - 7 -) // 7-character random 
string - -export function createChunkDecoder() { - const decoder = new TextDecoder() - return function (chunk: Uint8Array | undefined): string { - if (!chunk) return '' - return decoder.decode(chunk, { stream: true }) - } -} - -export function random (start: number, end: number) { - return start + Math.ceil(Math.random() * (end - start)) -} - -export function randomIP() { - return `11.${random(104, 107)}.${random(1, 255)}.${random(1, 255)}` -} - -export function parseHeadersFromCurl(content: string) { - const re = /-H '([^:]+):\s*([^']+)/mg - const headers: HeadersInit = {} - content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // convert a cmd-style curl command into bash-style curl - content.replace(re, (_: string, key: string, value: string) => { - headers[key] = value - return '' - }) - - return headers -} - -export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2'] -export function encodeHeadersToCookie(content: string) { - const base64Content = btoa(content) - const contentChunks = base64Content.match(/.{1,4000}/g) || [] - return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? ''}`) -} - -export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) { - let base64Content = '' - ChunkKeys.forEach((key) => { - base64Content += (cookies[key] || '') - }) - try { - return atob(base64Content) - } catch(e) { - return '' - } -} - -export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) { - return parseHeadersFromCurl(extraCurlFromCookie(cookies)) -} - -export function formatDate(input: string | number | Date): string { - const date = new Date(input) - return date.toLocaleDateString('en-US', { - month: 'long', - day: 'numeric', - year: 'numeric' - }) -} - -export function parseCookie(cookie: string, cookieName: string) { - const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie - return targetCookie ?
decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : '' -} - -export function parseCookies(cookie: string, cookieNames: string[]) { - const cookies: { [key: string]: string } = {} - cookieNames.forEach(cookieName => { - cookies[cookieName] = parseCookie(cookie, cookieName) - }) - return cookies -} - -export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0' -export const DEFAULT_IP = process.env.BING_IP || randomIP() - -export function parseUA(ua?: string, default_ua = DEFAULT_UA) { - return / EDGE?/i.test(decodeURIComponent(ua || '')) ? decodeURIComponent(ua!.trim()) : default_ua -} - -export function createHeaders(cookies: Partial<{ [key: string]: string }>, defaultHeaders?: Partial<{ [key: string]: string }>) { - let { - BING_COOKIE = process.env.BING_COOKIE, - BING_UA = process.env.BING_UA, - BING_IP = process.env.BING_IP, - BING_HEADER = process.env.BING_HEADER, - } = cookies - - if (BING_HEADER) { - return extraHeadersFromCookie({ - BING_HEADER, - ...cookies, - }) - } - - const ua = parseUA(BING_UA) - - if (!BING_COOKIE) { - BING_COOKIE = defaultHeaders?.IMAGE_BING_COOKIE || 'xxx' // on Hugging Face the service currently works without a real cookie - } - - const parsedCookie = parseCookie(BING_COOKIE, '_U') - if (!parsedCookie) { - throw new Error('Invalid Cookie') - } - return { - 'x-forwarded-for': BING_IP || DEFAULT_IP, - 'Accept-Encoding': 'gzip, deflate, br', - 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6', - 'User-Agent': ua!, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: `_U=${parsedCookie}`, - } -} - -export class WatchDog { - private tid = 0 - watch(fn: Function, timeout = 2000) { - clearTimeout(this.tid) - this.tid = setTimeout(fn, timeout + Math.random() * 1000) - } - reset() { - clearTimeout(this.tid) - } -} diff --git
a/spaces/lincquiQcaudo/Top-20-Diffusion/Biohazard4moviefiledownload.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Biohazard4moviefiledownload.md deleted file mode 100644 index 59f89abad2350c2a17dc0c8304dd0b681a16b417..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Biohazard4moviefiledownload.md +++ /dev/null @@ -1,6 +0,0 @@ -

    biohazard4moviefiledownload


    Download ———>>> https://bytlly.com/2uGx2w



    -
    -HD Online Player (biohazard4moviefiledownload) · Microsoft Office 2019 Pro Plus Retail Torrent · railworks 3 train simulator 2012 deluxe crack only blogspot 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Jorge Cardoso Milonga Pdf 13.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Jorge Cardoso Milonga Pdf 13.md deleted file mode 100644 index 82f408ae0226d1f47b203aab9911d324910080b9..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Jorge Cardoso Milonga Pdf 13.md +++ /dev/null @@ -1,8 +0,0 @@ -
    -

    Jorge Cardoso became a first-class Spanish guitarist. He earned a doctorate and is a professor at the National University of Córdoba, Argentina. He has been a visiting professor at universities in Europe and the United States, including Oxford, Yale, and Boston, has performed in Europe, America, Asia, and Africa, and has taken part in numerous festivals, conferences, seminars, and concerts in many countries.

    -

    Jorge Cardoso writes music for guitar in duo format (two guitars), usually played by two different guitarists. Many of his pieces have been recorded on CDs and DVDs and performed in many theaters. He has participated in the Puente (Bridge) of the National Orchestra of Spain.

    -

    jorge cardoso milonga pdf 13


    Download File 🆓 https://bytlly.com/2uGyvD



    -

    Jorge Cardoso is an excellent guitarist and a virtuoso who can play classical guitar from memory. His repertoire covers a wide range of musical styles and forms, including classical guitar, flamenco, tango, Latin American music, jazz, blues, rock, and more. He earned a doctorate and is a professor at the Royal Conservatory of Music, Madrid, Spain, and he is well known for his musical versatility and his professional mastery of the guitar.

    -

    Jorge Cardoso was born in Corrientes, Argentina, in 1957. He studied at the National University of Córdoba, Argentina, where he was a pupil of José Antonio Abreu, and he has written many compositions and performed in many places throughout his career. He also holds a teacher's degree in music.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Materi Al Islam Dan Kemuhammadiyahan.pdf.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Materi Al Islam Dan Kemuhammadiyahan.pdf.md deleted file mode 100644 index 13e1e75d40ff161b8ae78085c5046ec56fb436a7..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Materi Al Islam Dan Kemuhammadiyahan.pdf.md +++ /dev/null @@ -1,8 +0,0 @@ -

    Materi Al Islam Dan Kemuhammadiyahan.pdf


    Download Zip === https://bytlly.com/2uGyeY



    - -Recovery Toolbox Excel Crack · Materi Al Islam Dan Kemuhammadiyahan Pdf Download · samsung tab usb driver ce0168 · Comprodtv 4 download windows 10. In this tutorial we will learn how to use the Recovery Toolbox Excel software to back up your Excel files. Detailed information about the recovery tools is available in the Windows Store. -About Recovery Toolbox Excel: Recovery Toolbox Excel is an extremely simple and reliable tool that can extract deleted or unrecovered files from a Windows data recovery CD or ISO image. -We hope that our tutorial will be helpful in recovering your deleted files. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/llmonitor/benchmarks/app/prompts/[slug]/page.js b/spaces/llmonitor/benchmarks/app/prompts/[slug]/page.js deleted file mode 100644 index 8dbbe324ca6a19f0ea5a8f1fcbf4383d65fb257d..0000000000000000000000000000000000000000 --- a/spaces/llmonitor/benchmarks/app/prompts/[slug]/page.js +++ /dev/null @@ -1,76 +0,0 @@ -import Link from "next/link" -import db from "@/utils/db" - -export default async function PromptDetails({ params }) { - const { slug } = params - - const [prompt] = await db`SELECT * FROM prompts WHERE slug = ${slug}` - - // get results with their model (join) - const results = - await db`SELECT * FROM results INNER JOIN models ON results.model = models.id WHERE prompt = ${prompt.id} ORDER BY models.name ASC` - - console.log("results", results) - - const rubrics = await db`SELECT * FROM rubrics WHERE prompt = ${prompt.id}` - - return ( - <> -

    Prompt asked:

    -
    -
    {prompt.text}
    -
    - {prompt.note &&

    Note: {prompt.note}

    } -
    - - - - - - - - - - - - - {results.map((result, i) => ( - - - - - - - - ))} - -
    ModelAnswerLatencyRateScore
    - - {result.name} - - -
    {result.result.trim().substring(0, 1000)}
    -
    {parseInt(result.duration)}ms{result.rate.toFixed(2)} - {typeof result.score === "number" ? result.score : "not rated"} -
    -
    -
    -        

    This prompt is automatically graded using these rubrics:

    -
      - {rubrics - .sort((a, b) => a.grading - b.grading) - .map((rubric, i) => ( -
    • - the answer {rubric.grading} ({rubric.points} points) -
    • - ))} -
    -
    - - ) -} diff --git a/spaces/luisoala/glide-test/setup.py b/spaces/luisoala/glide-test/setup.py deleted file mode 100644 index 77497d14113b89a073c514774741ee8c94909d85..0000000000000000000000000000000000000000 --- a/spaces/luisoala/glide-test/setup.py +++ /dev/null @@ -1,14 +0,0 @@ -from setuptools import setup -setup( - name="glide-text2im", - packages=["glide_text2im"], - install_requires=[ - "Pillow", - "attrs", - "torch", - "filelock", - "requests", - "tqdm", - ], - author="OpenAI", -) diff --git a/spaces/mascIT/AgeGuesser/yolov5/utils/datasets.c b/spaces/mascIT/AgeGuesser/yolov5/utils/datasets.c deleted file mode 100644 index 645db3ad9a5e18c9ffeabfe067901e74d6acc49c..0000000000000000000000000000000000000000 --- a/spaces/mascIT/AgeGuesser/yolov5/utils/datasets.c +++ /dev/null @@ -1,27818 +0,0 @@ -/* Generated by Cython 3.0.0a10 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "pdf_toolbox.lib.dia_yolov5.utils.datasets", - "sources": [ - "pdf_toolbox\\lib\\dia_yolov5\\utils\\datasets.py" - ] - }, - "module_name": "pdf_toolbox.lib.dia_yolov5.utils.datasets" -} -END: Cython Metadata */ - -#ifndef PY_SSIZE_T_CLEAN -#define PY_SSIZE_T_CLEAN -#endif /* PY_SSIZE_T_CLEAN */ -#if defined(CYTHON_LIMITED_API) && 0 - #ifndef Py_LIMITED_API - #if CYTHON_LIMITED_API+0 > 0x03030000 - #define Py_LIMITED_API CYTHON_LIMITED_API - #else - #define Py_LIMITED_API 0x03030000 - #endif - #endif -#endif - -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02070000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.7+ or Python 3.3+. -#else -#define CYTHON_ABI "3_0_0a10" -#define __PYX_ABI_MODULE_NAME "_cython_" CYTHON_ABI -#define __PYX_TYPE_MODULE_PREFIX __PYX_ABI_MODULE_NAME "." 
-#define CYTHON_HEX_VERSION 0x030000AA -#define CYTHON_FUTURE_DIVISION 1 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(_WIN32) && !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #define HAVE_LONG_LONG -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#if defined(GRAALVM_PYTHON) - /* For very preliminary testing purposes. Most variables are set the same as PyPy. - The existence of this section does not imply that anything works or is even tested */ - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 1 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef 
CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS (PY_MAJOR_VERSION >= 3) - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#elif defined(PYPY_VERSION) - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef 
CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS (PY_MAJOR_VERSION >= 3) - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#elif defined(CYTHON_LIMITED_API) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 1 - #define CYTHON_COMPILING_IN_GRAAL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 1 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #endif - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_MODULE_STATE - #define 
CYTHON_USE_MODULE_STATE 1 - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #ifndef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #endif - #ifndef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #ifndef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #ifndef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL (PY_MAJOR_VERSION < 3 || PY_VERSION_HEX >= 0x03060000) - #endif - #ifndef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL (PY_VERSION_HEX >= 0x030700A1) - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 1 - #endif - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define 
CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #endif - #if PY_VERSION_HEX < 0x030400a1 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #elif !defined(CYTHON_USE_TP_FINALIZE) - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #if PY_VERSION_HEX < 0x030600B1 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #elif !defined(CYTHON_USE_DICT_VERSIONS) - #define CYTHON_USE_DICT_VERSIONS 1 - #endif - #if PY_VERSION_HEX < 0x030700A3 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #elif !defined(CYTHON_USE_EXC_INFO_STACK) - #define CYTHON_USE_EXC_INFO_STACK 1 - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if !defined(CYTHON_VECTORCALL) -#define CYTHON_VECTORCALL (CYTHON_FAST_PYCCALL && PY_VERSION_HEX >= 0x030800B1) -#endif -#define CYTHON_BACKPORT_VECTORCALL (CYTHON_METH_FASTCALL && PY_VERSION_HEX < 0x030800B1) -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_MAJOR_VERSION < 3 - #include "longintrepr.h" - #endif - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef 
CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR - #define CYTHON_MAYBE_UNUSED_VAR(x) CYTHON_UNUSED_VAR(x) -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned short uint16_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int16 uint16_t; - typedef unsigned __int32 uint32_t; - #endif - #endif - #if _MSC_VER < 1300 - #ifdef _WIN64 - typedef unsigned long long __pyx_uintptr_t; - #else - typedef unsigned int __pyx_uintptr_t; - #endif - #else - #ifdef _WIN64 - typedef unsigned __int64 __pyx_uintptr_t; - #else - typedef unsigned __int32 __pyx_uintptr_t; - #endif - #endif -#else - #include - typedef uintptr_t __pyx_uintptr_t; -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) && __cplusplus >= 201103L - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #elif __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if 
__has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__ ) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) - #define Py_OptimizeFlag 0 -#endif -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_DefaultClassType PyClass_Type - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" - #define __Pyx_DefaultClassType PyType_Type -#if PY_VERSION_HEX >= 0x030B00A1 - static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int p, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyObject *kwds=NULL, *argcount=NULL, *posonlyargcount=NULL, *kwonlyargcount=NULL; - PyObject *nlocals=NULL, *stacksize=NULL, *flags=NULL, *replace=NULL, *call_result=NULL, *empty=NULL; - const char *fn_cstr=NULL; - const char *name_cstr=NULL; - PyCodeObject* co=NULL; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - if (!(kwds=PyDict_New())) goto end; - if 
(!(argcount=PyLong_FromLong(a))) goto end; - if (PyDict_SetItemString(kwds, "co_argcount", argcount) != 0) goto end; - if (!(posonlyargcount=PyLong_FromLong(p))) goto end; - if (PyDict_SetItemString(kwds, "co_posonlyargcount", posonlyargcount) != 0) goto end; - if (!(kwonlyargcount=PyLong_FromLong(k))) goto end; - if (PyDict_SetItemString(kwds, "co_kwonlyargcount", kwonlyargcount) != 0) goto end; - if (!(nlocals=PyLong_FromLong(l))) goto end; - if (PyDict_SetItemString(kwds, "co_nlocals", nlocals) != 0) goto end; - if (!(stacksize=PyLong_FromLong(s))) goto end; - if (PyDict_SetItemString(kwds, "co_stacksize", stacksize) != 0) goto end; - if (!(flags=PyLong_FromLong(f))) goto end; - if (PyDict_SetItemString(kwds, "co_flags", flags) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_code", code) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_consts", c) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_names", n) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_varnames", v) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_freevars", fv) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_cellvars", cell) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_linetable", lnos) != 0) goto end; - if (!(fn_cstr=PyUnicode_AsUTF8AndSize(fn, NULL))) goto end; - if (!(name_cstr=PyUnicode_AsUTF8AndSize(name, NULL))) goto end; - if (!(co = PyCode_NewEmpty(fn_cstr, name_cstr, fline))) goto end; - if (!(replace = PyObject_GetAttrString((PyObject*)co, "replace"))) goto cleanup_code_too; - if (!(empty = PyTuple_New(0))) goto cleanup_code_too; // unfortunately __pyx_empty_tuple isn't available here - if (!(call_result = PyObject_Call(replace, empty, kwds))) goto cleanup_code_too; - Py_XDECREF((PyObject*)co); - co = (PyCodeObject*)call_result; - call_result = NULL; - if (0) { - cleanup_code_too: - Py_XDECREF((PyObject*)co); - co = NULL; - } - end: - Py_XDECREF(kwds); - Py_XDECREF(argcount); - Py_XDECREF(posonlyargcount); - Py_XDECREF(kwonlyargcount); - 
Py_XDECREF(nlocals); - Py_XDECREF(stacksize); - Py_XDECREF(replace); - Py_XDECREF(call_result); - Py_XDECREF(empty); - if (type) { - PyErr_Restore(type, value, traceback); - } - return co; - } -#elif PY_VERSION_HEX >= 0x030800B2 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_NewWithPosOnlyArgs(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif -#endif -#if PY_VERSION_HEX >= 0x030900A4 || defined(Py_IS_TYPE) - #define __Pyx_IS_TYPE(ob, type) Py_IS_TYPE(ob, type) -#else - #define __Pyx_IS_TYPE(ob, type) (((const PyObject*)ob)->ob_type == (type)) -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_METH_FASTCALL - #define __Pyx_METH_FASTCALL METH_FASTCALL - #define __Pyx_PyCFunction_FastCall __Pyx_PyCFunctionFast - #define __Pyx_PyCFunction_FastCallWithKeywords __Pyx_PyCFunctionFastWithKeywords -#else - #define __Pyx_METH_FASTCALL METH_VARARGS - #define 
__Pyx_PyCFunction_FastCall PyCFunction - #define __Pyx_PyCFunction_FastCallWithKeywords PyCFunctionWithKeywords -#endif -#if CYTHON_VECTORCALL - #define __pyx_vectorcallfunc vectorcallfunc - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET PY_VECTORCALL_ARGUMENTS_OFFSET - #define __Pyx_PyVectorcall_NARGS(n) PyVectorcall_NARGS((size_t)(n)) -#elif CYTHON_BACKPORT_VECTORCALL - typedef PyObject *(*__pyx_vectorcallfunc)(PyObject *callable, PyObject *const *args, - size_t nargsf, PyObject *kwnames); - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET ((size_t)1 << (8 * sizeof(size_t) - 1)) - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(((size_t)(n)) & ~__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET)) -#else - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET 0 - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(n)) -#endif -#if PY_VERSION_HEX < 0x030900B1 - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) ((void)m, PyType_FromSpecWithBases(s, b)) - typedef PyObject *(*__Pyx_PyCMethod)(PyObject *, PyTypeObject *, PyObject *const *, size_t, PyObject *); -#else - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) PyType_FromModuleAndSpec(m, s, b) - #define __Pyx_PyCMethod PyCMethod -#endif -#ifndef METH_METHOD - #define METH_METHOD 0x200 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyThreadState_Current PyThreadState_Get() -#elif !CYTHON_FAST_THREAD_STATE - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define 
__Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE void *__Pyx_PyModule_GetState(PyObject *op) -{ - void *result; - result = PyModule_GetState(op); - if (!result) - Py_FatalError("Couldn't find the module state"); - return result; -} -#endif -#define __Pyx_PyObject_GetSlot(obj, name, func_ctype) __Pyx_PyType_GetSlot(Py_TYPE(obj), name, func_ctype) -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((func_ctype) PyType_GetSlot((type), Py_##name)) -#else - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((type)->name) -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if PY_MAJOR_VERSION < 3 - #if CYTHON_COMPILING_IN_PYPY - #if PYPY_VERSION_NUM < 0x07030600 - #if defined(__cplusplus) && __cplusplus >= 201402L - [[deprecated("`with nogil:` inside 
a nogil function will not release the GIL in PyPy2 < 7.3.6")]] - #elif defined(__GNUC__) || defined(__clang__) - __attribute__ ((__deprecated__("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6"))) - #elif defined(_MSC_VER) - __declspec(deprecated("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6")) - #endif - static CYTHON_INLINE int PyGILState_Check(void) { - return 0; - } - #else // PYPY_VERSION_NUM < 0x07030600 - #endif // PYPY_VERSION_NUM < 0x07030600 - #else - static CYTHON_INLINE int PyGILState_Check(void) { - PyThreadState * tstate = _PyThreadState_Current; - return tstate && (tstate == PyGILState_GetThisThreadState()); - } - #endif -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX > 0x030600B4 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStrWithError(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStr(PyObject *dict, PyObject *name) { - PyObject *res = __Pyx_PyDict_GetItemStrWithError(dict, name); - if (res == NULL) PyErr_Clear(); - return res; -} -#elif PY_MAJOR_VERSION >= 3 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07020000) -#define __Pyx_PyDict_GetItemStrWithError PyDict_GetItemWithError -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#else -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStrWithError(PyObject *dict, 
PyObject *name) { -#if CYTHON_COMPILING_IN_PYPY - return PyDict_GetItem(dict, name); -#else - PyDictEntry *ep; - PyDictObject *mp = (PyDictObject*) dict; - long hash = ((PyStringObject *) name)->ob_shash; - assert(hash != -1); - ep = (mp->ma_lookup)(mp, name, hash); - if (ep == NULL) { - return NULL; - } - return ep->me_value; -#endif -} -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#endif -#if CYTHON_USE_TYPE_SLOTS - #define __Pyx_PyType_GetFlags(tp) (((PyTypeObject *)tp)->tp_flags) - #define __Pyx_PyType_HasFeature(type, feature) ((__Pyx_PyType_GetFlags(type) & (feature)) != 0) - #define __Pyx_PyObject_GetIterNextFunc(obj) (Py_TYPE(obj)->tp_iternext) -#else - #define __Pyx_PyType_GetFlags(tp) (PyType_GetFlags((PyTypeObject *)tp)) - #define __Pyx_PyType_HasFeature(type, feature) PyType_HasFeature(type, feature) - #define __Pyx_PyObject_GetIterNextFunc(obj) PyIter_Next -#endif -#if CYTHON_USE_TYPE_SPECS && PY_VERSION_HEX >= 0x03080000 -#define __Pyx_PyHeapTypeObject_GC_Del(obj) {\ - PyTypeObject *type = Py_TYPE(obj);\ - assert(__Pyx_PyType_HasFeature(type, Py_TPFLAGS_HEAPTYPE));\ - PyObject_GC_Del(obj);\ - Py_DECREF(type);\ -} -#else -#define __Pyx_PyHeapTypeObject_GC_Del(obj) PyObject_GC_Del(obj) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define CYTHON_PEP393_ENABLED 1 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GetLength(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_ReadChar(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((void)u, 1114111) - #define __Pyx_PyUnicode_KIND(u) ((void)u, (0)) - #define __Pyx_PyUnicode_DATA(u) ((void*)u) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)k, PyUnicode_ReadChar((PyObject*)(d), i)) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GetLength(u)) -#elif PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #if defined(PyUnicode_IS_READY) - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : 
_PyUnicode_Ready((PyObject *)(op))) - #else - #define __Pyx_PyUnicode_READY(op) (0) - #endif - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) ((int)PyUnicode_KIND(u)) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) - #if defined(PyUnicode_IS_READY) && defined(PyUnicode_GET_SIZE) - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length)) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #endif - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 
65535 : 1114111) - #define __Pyx_PyUnicode_KIND(u) ((int)sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #if !defined(PyUnicode_DecodeUnicodeEscape) - #define PyUnicode_DecodeUnicodeEscape(s, size, errors) PyUnicode_Decode(s, size, "unicode_escape", errors) - #endif - #if !defined(PyUnicode_Contains) || (PY_MAJOR_VERSION == 2 && PYPY_VERSION_NUM < 0x07030500) - #undef PyUnicode_Contains - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) - #endif - #if !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) - #endif - #if !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) - #endif -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? 
PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#if CYTHON_COMPILING_IN_CPYTHON - #define __Pyx_PySequence_ListKeepNew(obj)\ - (likely(PyList_CheckExact(obj) && Py_REFCNT(obj) == 1) ? 
__Pyx_NewRef(obj) : PySequence_List(obj)) -#else - #define __Pyx_PySequence_ListKeepNew(obj) PySequence_List(obj) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) __Pyx_IS_TYPE(obj, &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define 
__Pyx_PyAsyncMethodsStruct PyAsyncMethods -        #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) -    #else -        #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) -    #endif -#else -    #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct -    typedef struct { -        unaryfunc am_await; -        unaryfunc am_aiter; -        unaryfunc am_anext; -    } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS) -  #define _USE_MATH_DEFINES -#endif -#include <math.h> -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { -  float value; -  memset(&value, 0xFF, sizeof(value)); -  return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ -    { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno;  (void)__pyx_lineno; __pyx_clineno = __LINE__;  (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ -    { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifndef __PYX_EXTERN_C -  #ifdef __cplusplus -    #define __PYX_EXTERN_C extern "C" -  #else -    #define __PYX_EXTERN_C extern -  #endif -#endif - -#define __PYX_HAVE__pdf_toolbox__lib__dia_yolov5__utils__datasets -#define __PYX_HAVE_API__pdf_toolbox__lib__dia_yolov5__utils__datasets -/* Early includes */ -#ifdef _OPENMP -#include <omp.h> -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; -                const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && 
__PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed)  (\ -    (sizeof(type) < sizeof(Py_ssize_t))  ||\ -    (sizeof(type) > sizeof(Py_ssize_t) &&\ -          likely(v < (type)PY_SSIZE_T_MAX ||\ -                 v == (type)PY_SSIZE_T_MAX)  &&\ -          (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ -                                v == (type)PY_SSIZE_T_MIN)))  ||\ -    (sizeof(type) == sizeof(Py_ssize_t) &&\ -          (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ -                               v == (type)PY_SSIZE_T_MAX)))  ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { -    return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L -    #include <cstdlib> -    #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T -    #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T -    #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) -    #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L -    #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) -    #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else -    #define __Pyx_sst_abs(value) ((value<0) ? 
-value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) 
__Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const wchar_t *u) -{ - const wchar_t *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#else -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) -{ - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#endif -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, 
__PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -#if !CYTHON_USE_MODULE_STATE -static PyObject *__pyx_m = NULL; -static PyObject *__pyx_d; -static PyObject *__pyx_b; -static PyObject *__pyx_cython_runtime = NULL; -static PyObject *__pyx_empty_tuple; -static PyObject *__pyx_empty_bytes; -static PyObject *__pyx_empty_unicode; -#endif -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm = __FILE__; -static const char *__pyx_filename; - -/* #### Code section: filename_table ### */ - -static const char *__pyx_f[] = { - "pdf_toolbox\\\\lib\\\\dia_yolov5\\\\utils\\\\datasets.py", -}; -/* #### Code section: utility_code_proto_before_types ### */ -/* #### Code section: numeric_typedefs ### */ -/* #### Code section: 
complex_type_declarations ### */ -/* #### Code section: type_declarations ### */ - -/*--- Type declarations ---*/ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit; -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr; - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":42 - * - * - * def get_hash(paths): # <<<<<<<<<<<<<< - * # Returns a single hash value of a list of paths (files or dirs) - * size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash { - PyObject_HEAD - PyObject *__pyx_v_paths; -}; - - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":44 - * def get_hash(paths): - * # Returns a single hash value of a list of paths (files or dirs) - * size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes # <<<<<<<<<<<<<< - * h = hashlib.md5(str(size).encode()) # hash sizes - * h.update(''.join(paths).encode()) # hash paths - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *__pyx_outer_scope; - 
PyObject *__pyx_v_p; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":200 - * - * - * def load_mosaic(self, index): # <<<<<<<<<<<<<< - * # YOLOv5 4-mosaic loader. Loads 1 image + 3 random images into a 4-image mosaic - * labels4, segments4 = [], [] - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic { - PyObject_HEAD - PyObject *__pyx_v_s; - PyObject *__pyx_v_self; -}; - - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":204 - * labels4, segments4 = [], [] - * s = self.img_size - * yc, xc = (int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border) # mosaic center x, y # <<<<<<<<<<<<<< - * indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - * random.shuffle(indices) - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *__pyx_outer_scope; - PyObject *__pyx_v_x; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":247 - * - * - * def load_mosaic9(self, index): # <<<<<<<<<<<<<< - * # YOLOv5 9-mosaic loader. 
Loads 1 image + 8 random images into a 9-image mosaic - * labels9, segments9 = [], [] - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 { - PyObject_HEAD - PyObject *__pyx_v_c; - PyObject *__pyx_v_s; - PyObject *__pyx_v_self; -}; - - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":280 - * - * padx, pady = c[:2] - * x1, y1, x2, y2 = (max(x, 0) for x in c) # allocate coords # <<<<<<<<<<<<<< - * - * # Labels - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *__pyx_outer_scope; - PyObject *__pyx_v_x; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":295 - * - * # Offset - * yc, xc = (int(random.uniform(0, s)) for _ in self.mosaic_border) # mosaic center x, y # <<<<<<<<<<<<<< - * img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s] - * - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *__pyx_outer_scope; - PyObject *__pyx_v__; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":362 - * - * - * def autosplit(path='../datasets/coco128/images', weights=(0.9, 0.1, 0.0), annotated_only=False): # <<<<<<<<<<<<<< - * """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - * Usage: from utils.datasets import *; autosplit() - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit { - PyObject_HEAD - PyObject *__pyx_v_path; -}; - - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":371 - * """ - * path = Path(path) # images dir - * files 
= sorted(x for x in path.rglob('*.*') if x.suffix[1:].lower() in IMG_FORMATS) # image files only # <<<<<<<<<<<<<< - * n = len(files) # number of files - * random.seed(0) # for reproducibility - */ -struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr { - PyObject_HEAD - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *__pyx_outer_scope; - PyObject *__pyx_v_x; -}; - -/* #### Code section: utility_code_proto ### */ - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, Py_ssize_t); - void (*DECREF)(void*, PyObject*, Py_ssize_t); - void (*GOTREF)(void*, PyObject*, Py_ssize_t); - void (*GIVEREF)(void*, PyObject*, Py_ssize_t); - void* (*SetupContext)(const char*, Py_ssize_t, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - } - #define __Pyx_RefNannyFinishContextNogil() {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __Pyx_RefNannyFinishContext();\ - PyGILState_Release(__pyx_gilstate_save);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__)) - #define __Pyx_RefNannyFinishContextNogil() __Pyx_RefNannyFinishContext() 
-#endif - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_XINCREF(r) do { if((r) == NULL); else {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) == NULL); else {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) == NULL); else {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) == NULL); else {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContextNogil() - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_Py_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; Py_XDECREF(tmp);\ - } while (0) -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) 
__Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define 
__Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name); - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* TupleAndListFromArray.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n); -static CYTHON_INLINE PyObject* __Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n); -#endif - -/* IncludeStringH.proto */ -#include <string.h> - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* fastcall.proto */ -#define __Pyx_Arg_VARARGS(args, i) PyTuple_GET_ITEM(args, i) -#define __Pyx_NumKwargs_VARARGS(kwds) PyDict_Size(kwds) -#define __Pyx_KwValues_VARARGS(args, nargs) NULL -#define __Pyx_GetKwValue_VARARGS(kw, kwvalues, s) __Pyx_PyDict_GetItemStrWithError(kw, s) -#define __Pyx_KwargsAsDict_VARARGS(kw, kwvalues) PyDict_Copy(kw) -#if CYTHON_METH_FASTCALL - #define __Pyx_Arg_FASTCALL(args, i) args[i] - #define __Pyx_NumKwargs_FASTCALL(kwds) PyTuple_GET_SIZE(kwds) - #define __Pyx_KwValues_FASTCALL(args, nargs) (&args[nargs]) - static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s); - #define __Pyx_KwargsAsDict_FASTCALL(kw, kwvalues) _PyStack_AsDict(kwvalues, kw) -#else - #define 
__Pyx_Arg_FASTCALL __Pyx_Arg_VARARGS - #define __Pyx_NumKwargs_FASTCALL __Pyx_NumKwargs_VARARGS - #define __Pyx_KwValues_FASTCALL __Pyx_KwValues_VARARGS - #define __Pyx_GetKwValue_FASTCALL __Pyx_GetKwValue_VARARGS - #define __Pyx_KwargsAsDict_FASTCALL __Pyx_KwargsAsDict_VARARGS -#endif -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) __Pyx_PyTuple_FromArray(&__Pyx_Arg_VARARGS(args, start), stop - start) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) __Pyx_PyTuple_FromArray(&__Pyx_Arg_FASTCALL(args, start), stop - start) -#else -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) PyTuple_GetSlice(args, start, stop) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) PyTuple_GetSlice(args, start, stop) -#endif - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject *const *kwvalues, - PyObject **argnames[], - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, - const char* function_name); - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseClosureNameError.proto */ -static CYTHON_INLINE void __Pyx_RaiseClosureNameError(const char *varname); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - 
(VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -#define __Pyx_GetModuleGlobalNameUncached(var, name) {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#if !CYTHON_VECTORCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -static PyObject 
*__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif -#if !CYTHON_VECTORCALL - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif // !CYTHON_VECTORCALL -#endif - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectFastCall.proto */ -#define __Pyx_PyObject_FastCall(func, args, nargs) __Pyx_PyObject_FastCallDict(func, args, (size_t)(nargs), NULL) -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject **args, size_t nargs, PyObject *kwargs); - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* pep479.proto */ -static void 
__Pyx_Generator_Replace_StopIteration(int in_async_gen); - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* PyObject_Str.proto */ -#define __Pyx_PyObject_Str(obj)\ - (likely(PyString_CheckExact(obj)) ? __Pyx_NewRef(obj) : PyObject_Str(obj)) - -/* DictGetItem.proto */ -#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY -static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key); -#define __Pyx_PyObject_Dict_GetItem(obj, name)\ - (likely(PyDict_CheckExact(obj)) ?\ - __Pyx_PyDict_GetItem(obj, name) : PyObject_GetItem(obj, name)) -#else -#define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key) -#define __Pyx_PyObject_Dict_GetItem(obj, name) PyObject_GetItem(obj, name) -#endif - -/* PyIntCompare.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_EqObjC(PyObject *op1, PyObject *op2, long intval, long inplace); - -/* GetItemInt.proto */ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ? 
(PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) 
PyErr_SetExcInfo(type, value, tb) -#endif - -/* dict_getitem_default.proto */ -static PyObject* __Pyx_PyDict_GetItemDefault(PyObject* d, PyObject* key, PyObject* default_value); - -/* UnpackUnboundCMethod.proto */ -typedef struct { - PyObject *type; - PyObject **method_name; - PyCFunction func; - PyObject *method; - int flag; -} __Pyx_CachedCFunction; - -/* CallUnboundCMethod1.proto */ -static PyObject* __Pyx__CallUnboundCMethod1(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg); -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_CallUnboundCMethod1(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg); -#else -#define __Pyx_CallUnboundCMethod1(cfunc, self, arg) __Pyx__CallUnboundCMethod1(cfunc, self, arg) -#endif - -/* CallUnboundCMethod2.proto */ -static PyObject* __Pyx__CallUnboundCMethod2(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg1, PyObject* arg2); -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030600B1 -static CYTHON_INLINE PyObject *__Pyx_CallUnboundCMethod2(__Pyx_CachedCFunction *cfunc, PyObject *self, PyObject *arg1, PyObject *arg2); -#else -#define __Pyx_CallUnboundCMethod2(cfunc, self, arg1, arg2) __Pyx__CallUnboundCMethod2(cfunc, self, arg1, arg2) -#endif - -/* DelItemInt.proto */ -#define __Pyx_DelItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_DelItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound) :\ - (is_list ? (PyErr_SetString(PyExc_IndexError, "list assignment index out of range"), -1) :\ - __Pyx_DelItem_Generic(o, to_py_func(i)))) -static int __Pyx_DelItem_Generic(PyObject *o, PyObject *j); -static CYTHON_INLINE int __Pyx_DelItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound); - -/* PySequenceContains.proto */ -static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyObject* seq, int eq) { - int result = PySequence_Contains(seq, item); - return unlikely(result < 0) ? 
result : (result == (eq == Py_EQ)); -} - -/* PyObjectFormatSimple.proto */ -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - PyObject_Format(s, f)) -#elif PY_MAJOR_VERSION < 3 - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - likely(PyString_CheckExact(s)) ? PyUnicode_FromEncodedObject(s, NULL, "strict") :\ - PyObject_Format(s, f)) -#elif CYTHON_USE_TYPE_SLOTS - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - likely(PyLong_CheckExact(s)) ? PyLong_Type.tp_repr(s) :\ - likely(PyFloat_CheckExact(s)) ? PyFloat_Type.tp_repr(s) :\ - PyObject_Format(s, f)) -#else - #define __Pyx_PyObject_FormatSimple(s, f) (\ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) :\ - PyObject_Format(s, f)) -#endif - -/* JoinPyUnicode.proto */ -static PyObject* __Pyx_PyUnicode_Join(PyObject* value_tuple, Py_ssize_t value_count, Py_ssize_t result_ulength, - Py_UCS4 max_char); - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) -#endif - -/* PyObjectSetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -#define __Pyx_PyObject_DelAttrStr(o,n) __Pyx_PyObject_SetAttrStr(o, n, NULL) -static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value); -#else -#define __Pyx_PyObject_DelAttrStr(o,n) PyObject_DelAttr(o,n) -#define 
__Pyx_PyObject_SetAttrStr(o,n,v) PyObject_SetAttr(o,n,v) -#endif - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject *key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* IterFinish.proto */ -static CYTHON_INLINE int __Pyx_IterFinish(void); - -/* UnpackItemEndCheck.proto */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected); - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2)) -#endif - -/* SliceObject.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetSlice( - PyObject* obj, Py_ssize_t cstart, Py_ssize_t cstop, - PyObject** py_start, PyObject** py_stop, PyObject** py_slice, - int has_cstart, int has_cstop, int wraparound); - -/* PyIntCompare.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_NeObjC(PyObject *op1, PyObject *op2, long intval, long inplace); - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_MultiplyCObj(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_MultiplyCObj(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? 
PyNumber_InPlaceMultiply(op1, op2) : PyNumber_Multiply(op1, op2)) -#endif - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_MultiplyObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_MultiplyObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceMultiply(op1, op2) : PyNumber_Multiply(op1, op2)) -#endif - -/* RaiseUnboundLocalError.proto */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname); - -/* ListAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_PyList_Append(L,x) PyList_Append(L,x) -#endif - -/* PyObjectCall2Args.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyObjectGetMethod.proto */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method); - -/* PyObjectCallMethod1.proto */ -static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg); - -/* append.proto */ -static CYTHON_INLINE int __Pyx_PyObject_Append(PyObject* L, PyObject* x); - -/* ListExtend.proto */ -static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) { -#if CYTHON_COMPILING_IN_CPYTHON - PyObject* none = _PyList_Extend((PyListObject*)L, v); - if (unlikely(!none)) - return -1; - Py_DECREF(none); - return 0; -#else - return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v); -#endif -} - -/* SliceTupleAndList.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* 
__Pyx_PyList_GetSlice(PyObject* src, Py_ssize_t start, Py_ssize_t stop); -static CYTHON_INLINE PyObject* __Pyx_PyTuple_GetSlice(PyObject* src, Py_ssize_t start, Py_ssize_t stop); -#else -#define __Pyx_PyList_GetSlice(seq, start, stop) PySequence_GetSlice(seq, start, stop) -#define __Pyx_PyTuple_GetSlice(seq, start, stop) PySequence_GetSlice(seq, start, stop) -#endif - -/* PyObjectLookupSpecial.proto */ -#if CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS -#define __Pyx_PyObject_LookupSpecialNoError(obj, attr_name) __Pyx__PyObject_LookupSpecial(obj, attr_name, 0) -#define __Pyx_PyObject_LookupSpecial(obj, attr_name) __Pyx__PyObject_LookupSpecial(obj, attr_name, 1) -static CYTHON_INLINE PyObject* __Pyx__PyObject_LookupSpecial(PyObject* obj, PyObject* attr_name, int with_error); -#else -#define __Pyx_PyObject_LookupSpecialNoError(o,n) __Pyx_PyObject_GetAttrStrNoError(o,n) -#define __Pyx_PyObject_LookupSpecial(o,n) __Pyx_PyObject_GetAttrStr(o,n) -#endif - -/* SliceObject.proto */ -#define __Pyx_PyObject_DelSlice(obj, cstart, cstop, py_start, py_stop, py_slice, has_cstart, has_cstop, wraparound)\ - __Pyx_PyObject_SetSlice(obj, (PyObject*)NULL, cstart, cstop, py_start, py_stop, py_slice, has_cstart, has_cstop, wraparound) -static CYTHON_INLINE int __Pyx_PyObject_SetSlice( - PyObject* obj, PyObject* value, Py_ssize_t cstart, Py_ssize_t cstop, - PyObject** py_start, PyObject** py_stop, PyObject** py_slice, - int has_cstart, int has_cstop, int wraparound); - -/* IncludeStructmemberH.proto */ -#include <structmember.h> - -/* FixUpExtensionType.proto */ -#if CYTHON_USE_TYPE_SPECS -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type); -#endif - -/* PyObjectCallNoArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); - -/* PyObjectCallMethod0.proto */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name); - -/* ValidateBasesTuple.proto */ -#if CYTHON_COMPILING_IN_CPYTHON || 
CYTHON_COMPILING_IN_LIMITED_API || CYTHON_USE_TYPE_SPECS -static int __Pyx_validate_bases_tuple(const char *type_name, Py_ssize_t dictoffset, PyObject *bases); -#endif - -/* PyType_Ready.proto */ -static CYTHON_UNUSED int __Pyx_PyType_Ready(PyTypeObject *t); - -/* PyObject_GenericGetAttrNoDict.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr -#endif - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* ImportDottedModule.proto */ -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple); - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* RaiseNoneIterError.proto */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); - -/* UnpackTupleError.proto */ -static void __Pyx_UnpackTupleError(PyObject *, Py_ssize_t index); - -/* UnpackTuple2.proto */ -#define __Pyx_unpack_tuple2(tuple, value1, value2, is_tuple, has_known_size, decref_tuple)\ - (likely(is_tuple || PyTuple_Check(tuple)) ?\ - (likely(has_known_size || PyTuple_GET_SIZE(tuple) == 2) ?\ - __Pyx_unpack_tuple2_exact(tuple, value1, value2, decref_tuple) :\ - (__Pyx_UnpackTupleError(tuple, 2), -1)) :\ - __Pyx_unpack_tuple2_generic(tuple, value1, value2, has_known_size, decref_tuple)) -static CYTHON_INLINE int __Pyx_unpack_tuple2_exact( - PyObject* tuple, PyObject** value1, PyObject** value2, int decref_tuple); -static int __Pyx_unpack_tuple2_generic( - PyObject* tuple, PyObject** value1, PyObject** value2, int has_known_size, int decref_tuple); - -/* dict_iter.proto */ -static CYTHON_INLINE PyObject* __Pyx_dict_iterator(PyObject* dict, int is_dict, PyObject* method_name, - Py_ssize_t* p_orig_length, int* p_is_dict); -static CYTHON_INLINE int 
__Pyx_dict_iter_next(PyObject* dict_or_iter, Py_ssize_t orig_length, Py_ssize_t* ppos, - PyObject** pkey, PyObject** pvalue, PyObject** pitem, int is_dict); - -/* FetchCommonType.proto */ -#if !CYTHON_USE_TYPE_SPECS -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type); -#else -static PyTypeObject* __Pyx_FetchCommonTypeFromSpec(PyObject *module, PyType_Spec *spec, PyObject *bases); -#endif - -/* PyMethodNew.proto */ -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx_PyMethod_New(PyObject *func, PyObject *self, PyObject *typ) { - CYTHON_UNUSED_VAR(typ); - if (!self) - return __Pyx_NewRef(func); - return PyMethod_New(func, self); -} -#else - #define __Pyx_PyMethod_New PyMethod_New -#endif - -/* PyVectorcallFastCallDict.proto */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw); -#endif - -/* CythonFunctionShared.proto */ -#define __Pyx_CyFunction_USED -#define __Pyx_CYFUNCTION_STATICMETHOD 0x01 -#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02 -#define __Pyx_CYFUNCTION_CCLASS 0x04 -#define __Pyx_CYFUNCTION_COROUTINE 0x08 -#define __Pyx_CyFunction_GetClosure(f)\ - (((__pyx_CyFunctionObject *) (f))->func_closure) -#if PY_VERSION_HEX < 0x030900B1 - #define __Pyx_CyFunction_GetClassObj(f)\ - (((__pyx_CyFunctionObject *) (f))->func_classobj) -#else - #define __Pyx_CyFunction_GetClassObj(f)\ - ((PyObject*) ((PyCMethodObject *) (f))->mm_class) -#endif -#define __Pyx_CyFunction_SetClassObj(f, classobj)\ - __Pyx__CyFunction_SetClassObj((__pyx_CyFunctionObject *) (f), (classobj)) -#define __Pyx_CyFunction_Defaults(type, f)\ - ((type *)(((__pyx_CyFunctionObject *) (f))->defaults)) -#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\ - ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g) -typedef struct { -#if PY_VERSION_HEX < 0x030900B1 - PyCFunctionObject func; -#else - PyCMethodObject func; -#endif -#if CYTHON_BACKPORT_VECTORCALL - 
__pyx_vectorcallfunc func_vectorcall; -#endif -#if PY_VERSION_HEX < 0x030500A0 - PyObject *func_weakreflist; -#endif - PyObject *func_dict; - PyObject *func_name; - PyObject *func_qualname; - PyObject *func_doc; - PyObject *func_globals; - PyObject *func_code; - PyObject *func_closure; -#if PY_VERSION_HEX < 0x030900B1 - PyObject *func_classobj; -#endif - void *defaults; - int defaults_pyobjects; - size_t defaults_size; // used by FusedFunction for copying defaults - int flags; - PyObject *defaults_tuple; - PyObject *defaults_kwdict; - PyObject *(*defaults_getter)(PyObject *); - PyObject *func_annotations; - PyObject *func_is_coroutine; -} __pyx_CyFunctionObject; -#if !CYTHON_USE_MODULE_STATE -static PyTypeObject *__pyx_CyFunctionType = 0; -#endif -#define __Pyx_CyFunction_Check(obj) __Pyx_TypeCheck(obj, __pyx_CyFunctionType) -#define __Pyx_IsCyOrPyCFunction(obj) __Pyx_TypeCheck2(obj, __pyx_CyFunctionType, &PyCFunction_Type) -#define __Pyx_CyFunction_CheckExact(obj) __Pyx_IS_TYPE(obj, __pyx_CyFunctionType) -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject* op, PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj); -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m, - size_t size, - int pyobjects); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m, - PyObject *tuple); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m, - PyObject *dict); -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m, - PyObject *dict); -static int __pyx_CyFunction_init(PyObject *module); -#if CYTHON_METH_FASTCALL -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const 
*args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -#if CYTHON_BACKPORT_VECTORCALL -#define __Pyx_CyFunction_func_vectorcall(f) (((__pyx_CyFunctionObject*)f)->func_vectorcall) -#else -#define __Pyx_CyFunction_func_vectorcall(f) (((PyCFunctionObject*)f)->vectorcall) -#endif -#endif - -/* CythonFunction.proto */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); - -/* SetNameInClass.proto */ -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 -#define __Pyx_SetNameInClass(ns, name, value)\ - (likely(PyDict_CheckExact(ns)) ? _PyDict_SetItem_KnownHash(ns, name, value, ((PyASCIIObject *) name)->hash) : PyObject_SetItem(ns, name, value)) -#elif CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_SetNameInClass(ns, name, value)\ - (likely(PyDict_CheckExact(ns)) ? PyDict_SetItem(ns, name, value) : PyObject_SetItem(ns, name, value)) -#else -#define __Pyx_SetNameInClass(ns, name, value) PyObject_SetItem(ns, name, value) -#endif - -/* CalculateMetaclass.proto */ -static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases); - -/* Py3ClassCreate.proto */ -static PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, PyObject *qualname, - PyObject *mkw, PyObject *modname, PyObject *doc); -static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, PyObject *dict, - PyObject *mkw, int calculate_metaclass, int allow_py2_metaclass); - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? 
c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -#if !CYTHON_COMPILING_IN_LIMITED_API -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); -#endif - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -/* GCCDiagnostics.proto */ -#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* FormatTypeName.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -typedef PyObject *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%U" -static __Pyx_TypeName __Pyx_PyType_GetName(PyTypeObject* tp); -#define __Pyx_DECREF_TypeName(obj) Py_XDECREF(obj) -#else -typedef const char *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%.200s" -#define __Pyx_PyType_GetName(tp) ((tp)->tp_name) -#define __Pyx_DECREF_TypeName(obj) -#endif - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) __Pyx_IsAnySubtype2(Py_TYPE(obj), (PyTypeObject *)type1, (PyTypeObject *)type2) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, 
PyTypeObject *b); -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) (PyObject_TypeCheck(obj, (PyTypeObject *)type1) || PyObject_TypeCheck(obj, (PyTypeObject *)type2)) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyErr_ExceptionMatches2(err1, err2) __Pyx_PyErr_GivenExceptionMatches2(__Pyx_PyErr_Occurred(), err1, err2) -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* CoroutineBase.proto */ -struct __pyx_CoroutineObject; -typedef PyObject *(*__pyx_coroutine_body_t)(struct __pyx_CoroutineObject *, PyThreadState *, PyObject *); -#if CYTHON_USE_EXC_INFO_STACK -#define __Pyx_ExcInfoStruct _PyErr_StackItem -#else -typedef struct { - PyObject *exc_type; - PyObject *exc_value; - PyObject *exc_traceback; -} __Pyx_ExcInfoStruct; -#endif -typedef struct __pyx_CoroutineObject { - PyObject_HEAD - __pyx_coroutine_body_t body; - PyObject *closure; - __Pyx_ExcInfoStruct gi_exc_state; - PyObject *gi_weakreflist; - PyObject *classobj; - PyObject *yieldfrom; - PyObject 
*gi_name; - PyObject *gi_qualname; - PyObject *gi_modulename; - PyObject *gi_code; - PyObject *gi_frame; - int resume_label; - char is_running; -} __pyx_CoroutineObject; -static __pyx_CoroutineObject *__Pyx__Coroutine_New( - PyTypeObject *type, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name); -static __pyx_CoroutineObject *__Pyx__Coroutine_NewInit( - __pyx_CoroutineObject *gen, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name); -static CYTHON_INLINE void __Pyx_Coroutine_ExceptionClear(__Pyx_ExcInfoStruct *self); -static int __Pyx_Coroutine_clear(PyObject *self); -static PyObject *__Pyx_Coroutine_Send(PyObject *self, PyObject *value); -static PyObject *__Pyx_Coroutine_Close(PyObject *self); -static PyObject *__Pyx_Coroutine_Throw(PyObject *gen, PyObject *args); -#if CYTHON_USE_EXC_INFO_STACK -#define __Pyx_Coroutine_SwapException(self) -#define __Pyx_Coroutine_ResetAndClearException(self) __Pyx_Coroutine_ExceptionClear(&(self)->gi_exc_state) -#else -#define __Pyx_Coroutine_SwapException(self) {\ - __Pyx_ExceptionSwap(&(self)->gi_exc_state.exc_type, &(self)->gi_exc_state.exc_value, &(self)->gi_exc_state.exc_traceback);\ - __Pyx_Coroutine_ResetFrameBackpointer(&(self)->gi_exc_state);\ - } -#define __Pyx_Coroutine_ResetAndClearException(self) {\ - __Pyx_ExceptionReset((self)->gi_exc_state.exc_type, (self)->gi_exc_state.exc_value, (self)->gi_exc_state.exc_traceback);\ - (self)->gi_exc_state.exc_type = (self)->gi_exc_state.exc_value = (self)->gi_exc_state.exc_traceback = NULL;\ - } -#endif -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyGen_FetchStopIterationValue(pvalue)\ - __Pyx_PyGen__FetchStopIterationValue(__pyx_tstate, pvalue) -#else -#define __Pyx_PyGen_FetchStopIterationValue(pvalue)\ - __Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, pvalue) -#endif -static int 
__Pyx_PyGen__FetchStopIterationValue(PyThreadState *tstate, PyObject **pvalue); -static CYTHON_INLINE void __Pyx_Coroutine_ResetFrameBackpointer(__Pyx_ExcInfoStruct *exc_state); - -/* PatchModuleWithCoroutine.proto */ -static PyObject* __Pyx_Coroutine_patch_module(PyObject* module, const char* py_code); - -/* PatchGeneratorABC.proto */ -static int __Pyx_patch_abc(void); - -/* Generator.proto */ -#define __Pyx_Generator_USED -static PyTypeObject *__pyx_GeneratorType = 0; -#define __Pyx_Generator_CheckExact(obj) __Pyx_IS_TYPE(obj, __pyx_GeneratorType) -#define __Pyx_Generator_New(body, code, closure, name, qualname, module_name)\ - __Pyx__Coroutine_New(__pyx_GeneratorType, body, code, closure, name, qualname, module_name) -static PyObject *__Pyx_Generator_Next(PyObject *self); -static int __pyx_Generator_init(PyObject *module); - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_InitString(__Pyx_StringTabEntry t, PyObject **str); -#else -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); -#endif - -/* #### Code section: module_declarations ### */ - -/* Module declarations from "pdf_toolbox.lib.dia_yolov5.utils.datasets" */ -#if !CYTHON_USE_MODULE_STATE -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr = 0; 
-static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit = 0; -static PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr = 0; -#endif -/* #### Code section: typeinfo ### */ -/* #### Code section: before_global_var ### */ -#define __Pyx_MODULE_NAME "pdf_toolbox.lib.dia_yolov5.utils.datasets" -extern int __pyx_module_is_main_pdf_toolbox__lib__dia_yolov5__utils__datasets; -int __pyx_module_is_main_pdf_toolbox__lib__dia_yolov5__utils__datasets = 0; - -/* Implementation of "pdf_toolbox.lib.dia_yolov5.utils.datasets" */ -/* #### Code section: global_var ### */ -static PyObject *__pyx_builtin_sum; -static PyObject *__pyx_builtin_any; -static PyObject *__pyx_builtin_AssertionError; -static PyObject *__pyx_builtin_StopIteration; -static PyObject *__pyx_builtin_enumerate; -static PyObject *__pyx_builtin_open; -static PyObject *__pyx_builtin_print; -static PyObject *__pyx_builtin_zip; -/* #### Code section: string_decls ### */ -static const char __pyx_k_[] = ""; -static const char __pyx_k_F[] = "F"; -static const char __pyx_k_a[] = "a"; -static const char __pyx_k_b[] = "b"; -static const char __pyx_k_c[] = "c"; -static const char __pyx_k_f[] = "f"; -static const char __pyx_k_h[] = "h"; -static const char __pyx_k_i[] = "i"; -static const char __pyx_k_j[] = "j"; -static const char __pyx_k_k[] = "k"; -static const char __pyx_k_n[] = "n"; -static const char __pyx_k_p[] = "p"; -static const char __pyx_k_r[] = "r"; -static const char __pyx_k_s[] = "s"; -static const char __pyx_k_w[] = "w"; -static const char __pyx_k_x[] = "x"; -static const char __pyx_k__3[] = "*"; -static const char __pyx_k__4[] = "*.*"; -static const char __pyx_k__5[] = "."; -static const char __pyx_k__6[] = "/"; -static const char __pyx_k__7[] = " ("; -static const char __pyx_k__8[] = ") 
"; -static const char __pyx_k__9[] = ": "; -static const char __pyx_k_gc[] = "gc"; -static const char __pyx_k_h0[] = "h0"; -static const char __pyx_k_hp[] = "hp"; -static const char __pyx_k_im[] = "im"; -static const char __pyx_k_lb[] = "lb"; -static const char __pyx_k_nf[] = "nf"; -static const char __pyx_k_ni[] = "ni"; -static const char __pyx_k_nn[] = "nn"; -static const char __pyx_k_np[] = "np"; -static const char __pyx_k_nv[] = "nv"; -static const char __pyx_k_os[] = "os"; -static const char __pyx_k_sa[] = "sa"; -static const char __pyx_k_sb[] = "sb"; -static const char __pyx_k_w0[] = "w0"; -static const char __pyx_k_wp[] = "wp"; -static const char __pyx_k_x1[] = "x1"; -static const char __pyx_k_x2[] = "x2"; -static const char __pyx_k_xc[] = "xc"; -static const char __pyx_k_y1[] = "y1"; -static const char __pyx_k_y2[] = "y2"; -static const char __pyx_k_yc[] = "yc"; -static const char __pyx_k_PIL[] = "PIL"; -static const char __pyx_k__10[] = " "; -static const char __pyx_k__18[] = "/**/*.*"; -static const char __pyx_k__21[] = "_"; -static const char __pyx_k__25[] = "./"; -static const char __pyx_k__26[] = "\n"; -static const char __pyx_k__64[] = "?"; -static const char __pyx_k_any[] = "any"; -static const char __pyx_k_asf[] = "asf"; -static const char __pyx_k_avi[] = "avi"; -static const char __pyx_k_bmp[] = "bmp"; -static const char __pyx_k_cap[] = "cap"; -static const char __pyx_k_cv2[] = "cv2"; -static const char __pyx_k_dng[] = "dng"; -static const char __pyx_k_doc[] = "__doc__"; -static const char __pyx_k_get[] = "get"; -static const char __pyx_k_gif[] = "gif"; -static const char __pyx_k_img[] = "img"; -static const char __pyx_k_int[] = "int"; -static const char __pyx_k_jpg[] = ".jpg"; -static const char __pyx_k_len[] = "__len__"; -static const char __pyx_k_m4v[] = "m4v"; -static const char __pyx_k_md5[] = "md5"; -static const char __pyx_k_mkv[] = "mkv"; -static const char __pyx_k_mov[] = "mov"; -static const char __pyx_k_mp4[] = "mp4"; -static const char 
__pyx_k_mpg[] = "mpg"; -static const char __pyx_k_mpo[] = "mpo"; -static const char __pyx_k_new[] = "./new"; -static const char __pyx_k_npy[] = "npy"; -static const char __pyx_k_out[] = "out"; -static const char __pyx_k_png[] = "png"; -static const char __pyx_k_sep[] = "sep"; -static const char __pyx_k_sum[] = "sum"; -static const char __pyx_k_tif[] = "tif"; -static const char __pyx_k_txt[] = ".txt"; -static const char __pyx_k_wmv[] = "wmv"; -static const char __pyx_k_x1a[] = "x1a"; -static const char __pyx_k_x1b[] = "x1b"; -static const char __pyx_k_x2a[] = "x2a"; -static const char __pyx_k_x2b[] = "x2b"; -static const char __pyx_k_y1a[] = "y1a"; -static const char __pyx_k_y1b[] = "y1b"; -static const char __pyx_k_y2a[] = "y2a"; -static const char __pyx_k_y2b[] = "y2b"; -static const char __pyx_k_zip[] = "zip"; -static const char __pyx_k_Path[] = "Path"; -static const char __pyx_k_Pool[] = "Pool"; -static const char __pyx_k_TAGS[] = "TAGS"; -static const char __pyx_k_args[] = "args"; -static const char __pyx_k_auto[] = "auto"; -static const char __pyx_k_clip[] = "clip"; -static const char __pyx_k_copy[] = "copy"; -static const char __pyx_k_dict[] = "__dict__"; -static const char __pyx_k_exif[] = "exif"; -static const char __pyx_k_exit[] = "__exit__"; -static const char __pyx_k_file[] = "file"; -static const char __pyx_k_flat[] = "_flat"; -static const char __pyx_k_full[] = "full"; -static const char __pyx_k_glob[] = "glob"; -static const char __pyx_k_img0[] = "img0"; -static const char __pyx_k_img4[] = "img4"; -static const char __pyx_k_img9[] = "img9"; -static const char __pyx_k_imgs[] = "imgs"; -static const char __pyx_k_info[] = "info"; -static const char __pyx_k_init[] = "__init__"; -static const char __pyx_k_iter[] = "__iter__"; -static const char __pyx_k_join[] = "join"; -static const char __pyx_k_jpeg[] = "jpeg"; -static const char __pyx_k_json[] = "json"; -static const char __pyx_k_keys[] = "keys"; -static const char __pyx_k_load[] = "load"; -static const 
char __pyx_k_main[] = "__main__"; -static const char __pyx_k_math[] = "math"; -static const char __pyx_k_mode[] = "mode"; -static const char __pyx_k_mpeg[] = "mpeg"; -static const char __pyx_k_name[] = "name"; -static const char __pyx_k_next[] = "__next__"; -static const char __pyx_k_open[] = "open"; -static const char __pyx_k_padh[] = "padh"; -static const char __pyx_k_padw[] = "padw"; -static const char __pyx_k_padx[] = "padx"; -static const char __pyx_k_pady[] = "pady"; -static const char __pyx_k_path[] = "path"; -static const char __pyx_k_read[] = "read"; -static const char __pyx_k_seed[] = "seed"; -static const char __pyx_k_self[] = "self"; -static const char __pyx_k_send[] = "send"; -static const char __pyx_k_size[] = "size"; -static const char __pyx_k_spec[] = "__spec__"; -static const char __pyx_k_stem[] = "stem"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_tiff[] = "tiff"; -static const char __pyx_k_time[] = "time"; -static const char __pyx_k_tqdm[] = "tqdm"; -static const char __pyx_k_webp[] = "webp"; -static const char __pyx_k_yaml[] = "yaml"; -static const char __pyx_k_ERROR[] = "ERROR: "; -static const char __pyx_k_Image[] = "Image"; -static const char __pyx_k_array[] = "array"; -static const char __pyx_k_close[] = "close"; -static const char __pyx_k_count[] = "count"; -static const char __pyx_k_dtype[] = "dtype"; -static const char __pyx_k_enter[] = "__enter__"; -static const char __pyx_k_files[] = "files"; -static const char __pyx_k_frame[] = "frame"; -static const char __pyx_k_image[] = "image"; -static const char __pyx_k_index[] = "index"; -static const char __pyx_k_isdir[] = "isdir"; -static const char __pyx_k_items[] = "items"; -static const char __pyx_k_jpg_2[] = "jpg"; -static const char __pyx_k_lower[] = "lower"; -static const char __pyx_k_mkdir[] = "mkdir"; -static const char __pyx_k_numpy[] = "numpy"; -static const char __pyx_k_paths[] = "paths"; -static const char __pyx_k_print[] = "print"; -static const char 
__pyx_k_ravel[] = "ravel"; -static const char __pyx_k_rglob[] = "rglob"; -static const char __pyx_k_shape[] = "shape"; -static const char __pyx_k_split[] = "split"; -static const char __pyx_k_strip[] = "strip"; -static const char __pyx_k_super[] = "super"; -static const char __pyx_k_throw[] = "throw"; -static const char __pyx_k_torch[] = "torch"; -static const char __pyx_k_total[] = "total"; -static const char __pyx_k_txt_2[] = "txt"; -static const char __pyx_k_uint8[] = "uint8"; -static const char __pyx_k_video[] = "video"; -static const char __pyx_k_write[] = "write"; -static const char __pyx_k_Thread[] = "Thread"; -static const char __pyx_k_append[] = "append"; -static const char __pyx_k_astype[] = "astype"; -static const char __pyx_k_enable[] = "enable"; -static const char __pyx_k_encode[] = "encode"; -static const char __pyx_k_exists[] = "exists"; -static const char __pyx_k_frames[] = "frames"; -static const char __pyx_k_images[] = "images"; -static const char __pyx_k_img_hw[] = "img_hw"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_imread[] = "imread"; -static const char __pyx_k_is_dir[] = "is_dir"; -static const char __pyx_k_isfile[] = "isfile"; -static const char __pyx_k_labels[] = "labels"; -static const char __pyx_k_method[] = "method"; -static const char __pyx_k_module[] = "__module__"; -static const char __pyx_k_name_2[] = "__name__"; -static const char __pyx_k_parent[] = "parent"; -static const char __pyx_k_random[] = "random"; -static const char __pyx_k_repeat[] = "repeat"; -static const char __pyx_k_resize[] = "resize"; -static const char __pyx_k_rmtree[] = "rmtree"; -static const char __pyx_k_rsplit[] = "rsplit"; -static const char __pyx_k_shutil[] = "shutil"; -static const char __pyx_k_stride[] = "stride"; -static const char __pyx_k_suffix[] = "suffix"; -static const char __pyx_k_unlink[] = "unlink"; -static const char __pyx_k_update[] = "update"; -static const char __pyx_k_videos[] = "\nvideos: "; -static const 
char __pyx_k_xyn2xy[] = "xyn2xy"; -static const char __pyx_k_ZipFile[] = "ZipFile"; -static const char __pyx_k_augment[] = "augment"; -static const char __pyx_k_choices[] = "choices"; -static const char __pyx_k_disable[] = "disable"; -static const char __pyx_k_float32[] = "float32"; -static const char __pyx_k_genexpr[] = "genexpr"; -static const char __pyx_k_getexif[] = "_getexif"; -static const char __pyx_k_getsize[] = "getsize"; -static const char __pyx_k_hashlib[] = "hashlib"; -static const char __pyx_k_im_file[] = "im_file"; -static const char __pyx_k_image_2[] = "image "; -static const char __pyx_k_img_hw0[] = "img_hw0"; -static const char __pyx_k_img_npy[] = "img_npy"; -static const char __pyx_k_imwrite[] = "imwrite"; -static const char __pyx_k_indices[] = "indices"; -static const char __pyx_k_labels4[] = "labels4"; -static const char __pyx_k_labels9[] = "labels9"; -static const char __pyx_k_lb_file[] = "lb_file"; -static const char __pyx_k_parents[] = "parents"; -static const char __pyx_k_pathlib[] = "pathlib"; -static const char __pyx_k_prepare[] = "__prepare__"; -static const char __pyx_k_release[] = "release"; -static const char __pyx_k_reshape[] = "reshape"; -static const char __pyx_k_resolve[] = "resolve"; -static const char __pyx_k_ret_val[] = "ret_val"; -static const char __pyx_k_shuffle[] = "shuffle"; -static const char __pyx_k_tobytes[] = "tobytes"; -static const char __pyx_k_uniform[] = "uniform"; -static const char __pyx_k_video_2[] = "video "; -static const char __pyx_k_weights[] = "weights"; -static const char __pyx_k_zipfile[] = "zipfile"; -static const char __pyx_k_ExifTags[] = "ExifTags"; -static const char __pyx_k_HELP_URL[] = "HELP_URL"; -static const char __pyx_k_ImageOps[] = "ImageOps"; -static const char __pyx_k_as_posix[] = "as_posix"; -static const char __pyx_k_copyfile[] = "copyfile"; -static const char __pyx_k_get_hash[] = "get_hash"; -static const char __pyx_k_img_size[] = "img_size"; -static const char __pyx_k_makedirs[] = 
"makedirs"; -static const char __pyx_k_new_path[] = "new_path"; -static const char __pyx_k_qualname[] = "__qualname__"; -static const char __pyx_k_rotation[] = "rotation"; -static const char __pyx_k_segments[] = "segments"; -static const char __pyx_k_set_name[] = "__set_name__"; -static const char __pyx_k_videos_2[] = "videos"; -static const char __pyx_k_ROTATE_90[] = "ROTATE_90"; -static const char __pyx_k_TRANSPOSE[] = "TRANSPOSE"; -static const char __pyx_k_autosplit[] = "autosplit"; -static const char __pyx_k_enumerate[] = "enumerate"; -static const char __pyx_k_exif_size[] = "exif_size"; -static const char __pyx_k_getexif_2[] = "getexif"; -static const char __pyx_k_hexdigest[] = "hexdigest"; -static const char __pyx_k_img_files[] = "img_files"; -static const char __pyx_k_img_paths[] = "img_paths"; -static const char __pyx_k_isenabled[] = "isenabled"; -static const char __pyx_k_itertools[] = "itertools"; -static const char __pyx_k_letterbox[] = "letterbox"; -static const char __pyx_k_metaclass[] = "__metaclass__"; -static const char __pyx_k_new_video[] = "new_video"; -static const char __pyx_k_recursive[] = "recursive"; -static const char __pyx_k_segments4[] = "segments4"; -static const char __pyx_k_segments9[] = "segments9"; -static const char __pyx_k_threading[] = "threading"; -static const char __pyx_k_transpose[] = "transpose"; -static const char __pyx_k_xywh2xyxy[] = "xywh2xyxy"; -static const char __pyx_k_INTER_AREA[] = "INTER_AREA"; -static const char __pyx_k_LoadImages[] = "LoadImages"; -static const char __pyx_k_ROTATE_180[] = "ROTATE_180"; -static const char __pyx_k_ROTATE_270[] = "ROTATE_270"; -static const char __pyx_k_TRANSVERSE[] = "TRANSVERSE"; -static const char __pyx_k_ThreadPool[] = "ThreadPool"; -static const char __pyx_k_classifier[] = "classifier"; -static const char __pyx_k_functional[] = "functional"; -static const char __pyx_k_load_image[] = "load_image"; -static const char __pyx_k_missing_ok[] = "missing_ok"; -static const char 
__pyx_k_splitlines[] = "splitlines"; -static const char __pyx_k_video_flag[] = "video_flag"; -static const char __pyx_k_xywhn2xyxy[] = "xywhn2xyxy"; -static const char __pyx_k_IMG_FORMATS[] = "IMG_FORMATS"; -static const char __pyx_k_Orientation[] = "Orientation"; -static const char __pyx_k_VID_FORMATS[] = "VID_FORMATS"; -static const char __pyx_k_concatenate[] = "concatenate"; -static const char __pyx_k_load_mosaic[] = "load_mosaic"; -static const char __pyx_k_orientation[] = "orientation"; -static const char __pyx_k_relative_to[] = "relative_to"; -static const char __pyx_k_INTER_LINEAR[] = "INTER_LINEAR"; -static const char __pyx_k_VideoCapture[] = "VideoCapture"; -static const char __pyx_k_initializing[] = "_initializing"; -static const char __pyx_k_is_coroutine[] = "_is_coroutine"; -static const char __pyx_k_load_mosaic9[] = "load_mosaic9"; -static const char __pyx_k_StopIteration[] = "StopIteration"; -static const char __pyx_k_class_getitem[] = "__class_getitem__"; -static const char __pyx_k_create_folder[] = "create_folder"; -static const char __pyx_k_extract_boxes[] = "extract_boxes"; -static const char __pyx_k_init_subclass[] = "__init_subclass__"; -static const char __pyx_k_interpolation[] = "interpolation"; -static const char __pyx_k_mosaic_border[] = "mosaic_border"; -static const char __pyx_k_AssertionError[] = "AssertionError"; -static const char __pyx_k_annotated_only[] = "annotated_only"; -static const char __pyx_k_box_failure_in[] = "box failure in "; -static const char __pyx_k_does_not_exist[] = " does not exist"; -static const char __pyx_k_exif_transpose[] = "exif_transpose"; -static const char __pyx_k_FLIP_LEFT_RIGHT[] = "FLIP_LEFT_RIGHT"; -static const char __pyx_k_FLIP_TOP_BOTTOM[] = "FLIP_TOP_BOTTOM"; -static const char __pyx_k_Image_Not_Found[] = "Image Not Found "; -static const char __pyx_k_img2label_paths[] = "img2label_paths"; -static const char __pyx_k_LoadImages___len[] = "LoadImages.__len__"; -static const char 
__pyx_k_datasets_coco128[] = "../datasets/coco128"; -static const char __pyx_k_LoadImages___init[] = "LoadImages.__init__"; -static const char __pyx_k_LoadImages___iter[] = "LoadImages.__iter__"; -static const char __pyx_k_LoadImages___next[] = "LoadImages.__next__"; -static const char __pyx_k_ascontiguousarray[] = "ascontiguousarray"; -static const char __pyx_k_autosplit_val_txt[] = "autosplit_val.txt"; -static const char __pyx_k_flatten_recursive[] = "flatten_recursive"; -static const char __pyx_k_asyncio_coroutines[] = "asyncio.coroutines"; -static const char __pyx_k_autosplit_test_txt[] = "autosplit_test.txt"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_autosplit_train_txt[] = "autosplit_train.txt"; -static const char __pyx_k_torch_nn_functional[] = "torch.nn.functional"; -static const char __pyx_k_CAP_PROP_FRAME_COUNT[] = "CAP_PROP_FRAME_COUNT"; -static const char __pyx_k_LoadImages_new_video[] = "LoadImages.new_video"; -static const char __pyx_k_multiprocessing_pool[] = "multiprocessing.pool"; -static const char __pyx_k_datasets_coco128_images[] = "../datasets/coco128/images"; -static const char __pyx_k_get_hash_locals_genexpr[] = "get_hash..genexpr"; -static const char __pyx_k_autosplit_locals_genexpr[] = "autosplit..genexpr"; -static const char __pyx_k_Autosplitting_images_from[] = "Autosplitting images from "; -static const char __pyx_k_load_mosaic_locals_genexpr[] = "load_mosaic..genexpr"; -static const char __pyx_k_load_mosaic9_locals_genexpr[] = "load_mosaic9..genexpr"; -static const char __pyx_k_No_images_or_videos_found_in[] = "No images or videos found in "; -static const char __pyx_k_Supported_formats_are_images[] = ". 
Supported formats are:\nimages: "; -static const char __pyx_k_Dataloaders_and_dataset_utils[] = "\nDataloaders and dataset utils\n"; -static const char __pyx_k_using_txt_labeled_images_only[] = ", using *.txt labeled images only"; -static const char __pyx_k_https_github_com_ultralytics_yol[] = "https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data"; -static const char __pyx_k_pdf_toolbox_lib_dia_yolov5_utils[] = "pdf_toolbox.lib.dia_yolov5.utils.datasets"; -static const char __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_2[] = "pdf_toolbox.lib.dia_yolov5.utils.augmentations"; -static const char __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_3[] = "pdf_toolbox.lib.dia_yolov5.utils.general"; -static const char __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_4[] = "pdf_toolbox\\lib\\dia_yolov5\\utils\\datasets.py"; -#if !CYTHON_USE_MODULE_STATE -static PyObject *__pyx_kp_u_; -static PyObject *__pyx_n_s_AssertionError; -static PyObject *__pyx_kp_u_Autosplitting_images_from; -static PyObject *__pyx_n_s_CAP_PROP_FRAME_COUNT; -static PyObject *__pyx_kp_u_ERROR; -static PyObject *__pyx_n_s_ExifTags; -static PyObject *__pyx_n_s_F; -static PyObject *__pyx_n_s_FLIP_LEFT_RIGHT; -static PyObject *__pyx_n_s_FLIP_TOP_BOTTOM; -static PyObject *__pyx_n_s_HELP_URL; -static PyObject *__pyx_n_s_IMG_FORMATS; -static PyObject *__pyx_n_s_INTER_AREA; -static PyObject *__pyx_n_s_INTER_LINEAR; -static PyObject *__pyx_n_s_Image; -static PyObject *__pyx_n_s_ImageOps; -static PyObject *__pyx_kp_u_Image_Not_Found; -static PyObject *__pyx_n_s_LoadImages; -static PyObject *__pyx_n_s_LoadImages___init; -static PyObject *__pyx_n_s_LoadImages___iter; -static PyObject *__pyx_n_s_LoadImages___len; -static PyObject *__pyx_n_s_LoadImages___next; -static PyObject *__pyx_n_s_LoadImages_new_video; -static PyObject *__pyx_kp_u_No_images_or_videos_found_in; -static PyObject *__pyx_n_u_Orientation; -static PyObject *__pyx_n_s_PIL; -static PyObject *__pyx_n_s_Path; -static PyObject *__pyx_n_s_Pool; -static PyObject 
*__pyx_n_s_ROTATE_180; -static PyObject *__pyx_n_s_ROTATE_270; -static PyObject *__pyx_n_s_ROTATE_90; -static PyObject *__pyx_n_s_StopIteration; -static PyObject *__pyx_kp_u_Supported_formats_are_images; -static PyObject *__pyx_n_s_TAGS; -static PyObject *__pyx_n_s_TRANSPOSE; -static PyObject *__pyx_n_s_TRANSVERSE; -static PyObject *__pyx_n_s_Thread; -static PyObject *__pyx_n_s_ThreadPool; -static PyObject *__pyx_n_s_VID_FORMATS; -static PyObject *__pyx_n_s_VideoCapture; -static PyObject *__pyx_n_s_ZipFile; -static PyObject *__pyx_kp_u__10; -static PyObject *__pyx_kp_u__18; -static PyObject *__pyx_n_s__21; -static PyObject *__pyx_n_u__21; -static PyObject *__pyx_kp_u__25; -static PyObject *__pyx_kp_u__26; -static PyObject *__pyx_n_s__3; -static PyObject *__pyx_kp_u__3; -static PyObject *__pyx_kp_u__4; -static PyObject *__pyx_kp_u__5; -static PyObject *__pyx_kp_u__6; -static PyObject *__pyx_n_s__64; -static PyObject *__pyx_kp_u__7; -static PyObject *__pyx_kp_u__8; -static PyObject *__pyx_kp_u__9; -static PyObject *__pyx_n_u_a; -static PyObject *__pyx_n_s_annotated_only; -static PyObject *__pyx_n_s_any; -static PyObject *__pyx_n_s_append; -static PyObject *__pyx_n_s_args; -static PyObject *__pyx_n_s_array; -static PyObject *__pyx_n_s_as_posix; -static PyObject *__pyx_n_s_ascontiguousarray; -static PyObject *__pyx_n_u_asf; -static PyObject *__pyx_n_s_astype; -static PyObject *__pyx_n_s_asyncio_coroutines; -static PyObject *__pyx_n_s_augment; -static PyObject *__pyx_n_s_auto; -static PyObject *__pyx_n_s_autosplit; -static PyObject *__pyx_n_s_autosplit_locals_genexpr; -static PyObject *__pyx_kp_u_autosplit_test_txt; -static PyObject *__pyx_kp_u_autosplit_train_txt; -static PyObject *__pyx_kp_u_autosplit_val_txt; -static PyObject *__pyx_n_u_avi; -static PyObject *__pyx_n_s_b; -static PyObject *__pyx_n_u_bmp; -static PyObject *__pyx_kp_u_box_failure_in; -static PyObject *__pyx_n_s_c; -static PyObject *__pyx_n_s_cap; -static PyObject *__pyx_n_s_choices; -static PyObject 
*__pyx_n_s_class_getitem; -static PyObject *__pyx_n_u_classifier; -static PyObject *__pyx_n_s_cline_in_traceback; -static PyObject *__pyx_n_s_clip; -static PyObject *__pyx_n_s_close; -static PyObject *__pyx_n_s_concatenate; -static PyObject *__pyx_n_s_copy; -static PyObject *__pyx_n_s_copyfile; -static PyObject *__pyx_n_s_count; -static PyObject *__pyx_n_s_create_folder; -static PyObject *__pyx_n_s_cv2; -static PyObject *__pyx_kp_u_datasets_coco128; -static PyObject *__pyx_kp_u_datasets_coco128_images; -static PyObject *__pyx_n_s_dict; -static PyObject *__pyx_kp_u_disable; -static PyObject *__pyx_n_u_dng; -static PyObject *__pyx_n_s_doc; -static PyObject *__pyx_kp_u_does_not_exist; -static PyObject *__pyx_n_s_dtype; -static PyObject *__pyx_kp_u_enable; -static PyObject *__pyx_n_s_encode; -static PyObject *__pyx_n_s_enter; -static PyObject *__pyx_n_s_enumerate; -static PyObject *__pyx_n_s_exif; -static PyObject *__pyx_n_u_exif; -static PyObject *__pyx_n_s_exif_size; -static PyObject *__pyx_n_s_exif_transpose; -static PyObject *__pyx_n_s_exists; -static PyObject *__pyx_n_s_exit; -static PyObject *__pyx_n_s_extract_boxes; -static PyObject *__pyx_n_s_f; -static PyObject *__pyx_n_s_file; -static PyObject *__pyx_n_s_files; -static PyObject *__pyx_n_u_flat; -static PyObject *__pyx_n_s_flatten_recursive; -static PyObject *__pyx_n_s_float32; -static PyObject *__pyx_n_s_frame; -static PyObject *__pyx_n_s_frames; -static PyObject *__pyx_n_s_full; -static PyObject *__pyx_n_s_functional; -static PyObject *__pyx_kp_u_gc; -static PyObject *__pyx_n_s_genexpr; -static PyObject *__pyx_n_s_get; -static PyObject *__pyx_n_s_get_hash; -static PyObject *__pyx_n_s_get_hash_locals_genexpr; -static PyObject *__pyx_n_s_getexif; -static PyObject *__pyx_n_s_getexif_2; -static PyObject *__pyx_n_s_getsize; -static PyObject *__pyx_n_u_gif; -static PyObject *__pyx_n_s_glob; -static PyObject *__pyx_n_s_h; -static PyObject *__pyx_n_s_h0; -static PyObject *__pyx_n_s_hashlib; -static PyObject 
*__pyx_n_s_hexdigest; -static PyObject *__pyx_n_s_hp; -static PyObject *__pyx_kp_u_https_github_com_ultralytics_yol; -static PyObject *__pyx_n_s_i; -static PyObject *__pyx_n_s_im; -static PyObject *__pyx_n_s_im_file; -static PyObject *__pyx_n_s_image; -static PyObject *__pyx_n_u_image; -static PyObject *__pyx_kp_u_image_2; -static PyObject *__pyx_n_s_images; -static PyObject *__pyx_n_u_images; -static PyObject *__pyx_n_s_img; -static PyObject *__pyx_n_s_img0; -static PyObject *__pyx_n_s_img2label_paths; -static PyObject *__pyx_n_s_img4; -static PyObject *__pyx_n_s_img9; -static PyObject *__pyx_n_s_img_files; -static PyObject *__pyx_n_s_img_hw; -static PyObject *__pyx_n_s_img_hw0; -static PyObject *__pyx_n_s_img_npy; -static PyObject *__pyx_n_s_img_paths; -static PyObject *__pyx_n_s_img_size; -static PyObject *__pyx_n_s_imgs; -static PyObject *__pyx_n_s_import; -static PyObject *__pyx_n_s_imread; -static PyObject *__pyx_n_s_imwrite; -static PyObject *__pyx_n_s_index; -static PyObject *__pyx_n_s_indices; -static PyObject *__pyx_n_s_info; -static PyObject *__pyx_n_s_init; -static PyObject *__pyx_n_s_init_subclass; -static PyObject *__pyx_n_s_initializing; -static PyObject *__pyx_n_s_int; -static PyObject *__pyx_n_s_interpolation; -static PyObject *__pyx_n_s_is_coroutine; -static PyObject *__pyx_n_s_is_dir; -static PyObject *__pyx_n_s_isdir; -static PyObject *__pyx_kp_u_isenabled; -static PyObject *__pyx_n_s_isfile; -static PyObject *__pyx_n_s_items; -static PyObject *__pyx_n_s_iter; -static PyObject *__pyx_n_s_itertools; -static PyObject *__pyx_n_s_j; -static PyObject *__pyx_n_s_join; -static PyObject *__pyx_n_u_jpeg; -static PyObject *__pyx_kp_u_jpg; -static PyObject *__pyx_n_u_jpg_2; -static PyObject *__pyx_n_s_json; -static PyObject *__pyx_n_s_k; -static PyObject *__pyx_n_s_keys; -static PyObject *__pyx_n_s_labels; -static PyObject *__pyx_n_u_labels; -static PyObject *__pyx_n_s_labels4; -static PyObject *__pyx_n_s_labels9; -static PyObject *__pyx_n_s_lb; -static 
PyObject *__pyx_n_s_lb_file; -static PyObject *__pyx_n_s_len; -static PyObject *__pyx_n_s_letterbox; -static PyObject *__pyx_n_s_load; -static PyObject *__pyx_n_s_load_image; -static PyObject *__pyx_n_s_load_mosaic; -static PyObject *__pyx_n_s_load_mosaic9; -static PyObject *__pyx_n_s_load_mosaic9_locals_genexpr; -static PyObject *__pyx_n_s_load_mosaic_locals_genexpr; -static PyObject *__pyx_n_s_lower; -static PyObject *__pyx_n_u_m4v; -static PyObject *__pyx_n_s_main; -static PyObject *__pyx_n_s_makedirs; -static PyObject *__pyx_n_s_math; -static PyObject *__pyx_n_s_md5; -static PyObject *__pyx_n_s_metaclass; -static PyObject *__pyx_n_s_method; -static PyObject *__pyx_n_s_missing_ok; -static PyObject *__pyx_n_s_mkdir; -static PyObject *__pyx_n_u_mkv; -static PyObject *__pyx_n_s_mode; -static PyObject *__pyx_n_s_module; -static PyObject *__pyx_n_s_mosaic_border; -static PyObject *__pyx_n_u_mov; -static PyObject *__pyx_n_u_mp4; -static PyObject *__pyx_n_u_mpeg; -static PyObject *__pyx_n_u_mpg; -static PyObject *__pyx_n_u_mpo; -static PyObject *__pyx_n_s_multiprocessing_pool; -static PyObject *__pyx_n_s_n; -static PyObject *__pyx_n_s_name; -static PyObject *__pyx_n_s_name_2; -static PyObject *__pyx_kp_u_new; -static PyObject *__pyx_n_s_new_path; -static PyObject *__pyx_n_s_new_video; -static PyObject *__pyx_n_s_next; -static PyObject *__pyx_n_s_nf; -static PyObject *__pyx_n_s_ni; -static PyObject *__pyx_n_s_nn; -static PyObject *__pyx_n_s_np; -static PyObject *__pyx_n_s_npy; -static PyObject *__pyx_n_s_numpy; -static PyObject *__pyx_n_s_nv; -static PyObject *__pyx_n_s_open; -static PyObject *__pyx_n_s_orientation; -static PyObject *__pyx_n_s_os; -static PyObject *__pyx_n_s_out; -static PyObject *__pyx_n_s_p; -static PyObject *__pyx_n_s_padh; -static PyObject *__pyx_n_s_padw; -static PyObject *__pyx_n_s_padx; -static PyObject *__pyx_n_s_pady; -static PyObject *__pyx_n_s_parent; -static PyObject *__pyx_n_s_parents; -static PyObject *__pyx_n_s_path; -static PyObject 
*__pyx_n_s_pathlib; -static PyObject *__pyx_n_s_paths; -static PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils; -static PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2; -static PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3; -static PyObject *__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4; -static PyObject *__pyx_n_u_png; -static PyObject *__pyx_n_s_prepare; -static PyObject *__pyx_n_s_print; -static PyObject *__pyx_n_s_qualname; -static PyObject *__pyx_n_s_r; -static PyObject *__pyx_n_s_random; -static PyObject *__pyx_n_s_ravel; -static PyObject *__pyx_n_s_read; -static PyObject *__pyx_n_s_recursive; -static PyObject *__pyx_n_s_relative_to; -static PyObject *__pyx_n_s_release; -static PyObject *__pyx_n_s_repeat; -static PyObject *__pyx_n_s_reshape; -static PyObject *__pyx_n_s_resize; -static PyObject *__pyx_n_s_resolve; -static PyObject *__pyx_n_s_ret_val; -static PyObject *__pyx_n_s_rglob; -static PyObject *__pyx_n_s_rmtree; -static PyObject *__pyx_n_s_rotation; -static PyObject *__pyx_n_s_rsplit; -static PyObject *__pyx_n_s_s; -static PyObject *__pyx_n_s_sa; -static PyObject *__pyx_n_s_sb; -static PyObject *__pyx_n_s_seed; -static PyObject *__pyx_n_s_segments; -static PyObject *__pyx_n_s_segments4; -static PyObject *__pyx_n_s_segments9; -static PyObject *__pyx_n_s_self; -static PyObject *__pyx_n_s_send; -static PyObject *__pyx_n_s_sep; -static PyObject *__pyx_n_s_set_name; -static PyObject *__pyx_n_s_shape; -static PyObject *__pyx_n_s_shuffle; -static PyObject *__pyx_n_s_shutil; -static PyObject *__pyx_n_s_size; -static PyObject *__pyx_n_s_spec; -static PyObject *__pyx_n_s_split; -static PyObject *__pyx_n_s_splitlines; -static PyObject *__pyx_n_s_stem; -static PyObject *__pyx_n_s_stride; -static PyObject *__pyx_n_s_strip; -static PyObject *__pyx_n_s_suffix; -static PyObject *__pyx_n_s_sum; -static PyObject *__pyx_n_s_super; -static PyObject *__pyx_n_s_test; -static PyObject *__pyx_n_s_threading; -static PyObject *__pyx_n_s_throw; -static 
PyObject *__pyx_n_u_tif; -static PyObject *__pyx_n_u_tiff; -static PyObject *__pyx_n_s_time; -static PyObject *__pyx_n_s_tobytes; -static PyObject *__pyx_n_s_torch; -static PyObject *__pyx_n_s_torch_nn_functional; -static PyObject *__pyx_n_s_total; -static PyObject *__pyx_n_s_tqdm; -static PyObject *__pyx_n_s_transpose; -static PyObject *__pyx_kp_u_txt; -static PyObject *__pyx_n_s_txt_2; -static PyObject *__pyx_n_s_uint8; -static PyObject *__pyx_n_s_uniform; -static PyObject *__pyx_n_s_unlink; -static PyObject *__pyx_n_s_update; -static PyObject *__pyx_kp_u_using_txt_labeled_images_only; -static PyObject *__pyx_n_u_video; -static PyObject *__pyx_kp_u_video_2; -static PyObject *__pyx_n_s_video_flag; -static PyObject *__pyx_kp_u_videos; -static PyObject *__pyx_n_s_videos_2; -static PyObject *__pyx_n_s_w; -static PyObject *__pyx_n_s_w0; -static PyObject *__pyx_n_u_webp; -static PyObject *__pyx_n_s_weights; -static PyObject *__pyx_n_u_wmv; -static PyObject *__pyx_n_s_wp; -static PyObject *__pyx_n_s_write; -static PyObject *__pyx_n_s_x; -static PyObject *__pyx_n_s_x1; -static PyObject *__pyx_n_s_x1a; -static PyObject *__pyx_n_s_x1b; -static PyObject *__pyx_n_s_x2; -static PyObject *__pyx_n_s_x2a; -static PyObject *__pyx_n_s_x2b; -static PyObject *__pyx_n_s_xc; -static PyObject *__pyx_n_s_xyn2xy; -static PyObject *__pyx_n_s_xywh2xyxy; -static PyObject *__pyx_n_s_xywhn2xyxy; -static PyObject *__pyx_n_s_y1; -static PyObject *__pyx_n_s_y1a; -static PyObject *__pyx_n_s_y1b; -static PyObject *__pyx_n_s_y2; -static PyObject *__pyx_n_s_y2a; -static PyObject *__pyx_n_s_y2b; -static PyObject *__pyx_n_s_yaml; -static PyObject *__pyx_n_s_yc; -static PyObject *__pyx_n_s_zip; -static PyObject *__pyx_n_s_zipfile; -#endif -/* #### Code section: decls ### */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_8get_hash_genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_get_hash(CYTHON_UNUSED 
PyObject *__pyx_self, PyObject *__pyx_v_paths); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_2exif_size(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_img); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_4exif_transpose(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_image); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_path, PyObject *__pyx_v_img_size, PyObject *__pyx_v_stride, PyObject *__pyx_v_auto); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_2__iter__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_4__next__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_6new_video(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_path); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_8__len__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_6img2label_paths(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_img_paths); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_8load_image(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_i); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic_genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10load_mosaic(CYTHON_UNUSED PyObject *__pyx_self, PyObject 
*__pyx_v_self, PyObject *__pyx_v_index); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_3genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_index); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_14create_folder(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_path); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_16flatten_recursive(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_path); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_18extract_boxes(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_path); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9autosplit_genexpr(PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_20autosplit(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_path, PyObject *__pyx_v_weights, PyObject *__pyx_v_annotated_only); /* proto */ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr(PyTypeObject *t, PyObject *a, 
PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static __Pyx_CachedCFunction __pyx_umethod_PyDict_Type_get = {0, 0, 0, 0, 0}; -#if !CYTHON_USE_MODULE_STATE -static PyObject *__pyx_float_0_0; -static PyObject *__pyx_float_0_1; -static PyObject *__pyx_float_0_9; -static PyObject *__pyx_float_1_2; -static PyObject *__pyx_int_0; -static PyObject *__pyx_int_1; -static PyObject *__pyx_int_2; -static PyObject *__pyx_int_3; -static PyObject *__pyx_int_4; -static PyObject *__pyx_int_5; -static PyObject *__pyx_int_6; -static PyObject *__pyx_int_7; -static PyObject *__pyx_int_8; -static PyObject *__pyx_int_32; -static PyObject *__pyx_int_114; -static PyObject *__pyx_int_274; -static PyObject *__pyx_int_640; -static PyObject *__pyx_int_neg_1; -#endif -#if !CYTHON_USE_MODULE_STATE -static PyObject *__pyx_tuple__2; -static PyObject *__pyx_slice__12; -static PyObject *__pyx_slice__14; -static PyObject *__pyx_slice__15; -static PyObject *__pyx_slice__16; -static PyObject *__pyx_slice__22; -static PyObject *__pyx_tuple__11; -static PyObject *__pyx_tuple__13; -static PyObject *__pyx_tuple__17; -static PyObject *__pyx_tuple__19; -static PyObject *__pyx_tuple__20; -static PyObject *__pyx_tuple__23; 
-static PyObject *__pyx_tuple__24; -static PyObject *__pyx_tuple__27; -static PyObject *__pyx_tuple__28; -static PyObject *__pyx_tuple__30; -static PyObject *__pyx_tuple__32; -static PyObject *__pyx_tuple__34; -static PyObject *__pyx_tuple__36; -static PyObject *__pyx_tuple__37; -static PyObject *__pyx_tuple__39; -static PyObject *__pyx_tuple__41; -static PyObject *__pyx_tuple__44; -static PyObject *__pyx_tuple__46; -static PyObject *__pyx_tuple__48; -static PyObject *__pyx_tuple__50; -static PyObject *__pyx_tuple__52; -static PyObject *__pyx_tuple__54; -static PyObject *__pyx_tuple__55; -static PyObject *__pyx_tuple__57; -static PyObject *__pyx_tuple__58; -static PyObject *__pyx_tuple__60; -static PyObject *__pyx_tuple__61; -static PyObject *__pyx_tuple__63; -static PyObject *__pyx_codeobj__29; -static PyObject *__pyx_codeobj__31; -static PyObject *__pyx_codeobj__33; -static PyObject *__pyx_codeobj__35; -static PyObject *__pyx_codeobj__38; -static PyObject *__pyx_codeobj__40; -static PyObject *__pyx_codeobj__42; -static PyObject *__pyx_codeobj__43; -static PyObject *__pyx_codeobj__45; -static PyObject *__pyx_codeobj__47; -static PyObject *__pyx_codeobj__49; -static PyObject *__pyx_codeobj__51; -static PyObject *__pyx_codeobj__53; -static PyObject *__pyx_codeobj__56; -static PyObject *__pyx_codeobj__59; -static PyObject *__pyx_codeobj__62; -#endif -/* #### Code section: late_includes ### */ -/* #### Code section: module_state ### */ -#if CYTHON_USE_MODULE_STATE -typedef struct { - PyObject *__pyx_d; - PyObject *__pyx_b; - PyObject *__pyx_cython_runtime; - PyObject *__pyx_empty_tuple; - PyObject *__pyx_empty_bytes; - PyObject *__pyx_empty_unicode; - #ifdef __Pyx_CyFunction_USED - PyTypeObject *__pyx_CyFunctionType; - #endif - #ifdef __Pyx_FusedFunction_USED - PyTypeObject *__pyx_FusedFunctionType; - #endif - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash; - PyObject 
*__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit; - PyTypeObject *__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr; - PyObject *__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr; - PyObject *__pyx_kp_u_; - PyObject *__pyx_n_s_AssertionError; - PyObject *__pyx_kp_u_Autosplitting_images_from; - PyObject *__pyx_n_s_CAP_PROP_FRAME_COUNT; - PyObject *__pyx_kp_u_ERROR; - PyObject 
*__pyx_n_s_ExifTags; - PyObject *__pyx_n_s_F; - PyObject *__pyx_n_s_FLIP_LEFT_RIGHT; - PyObject *__pyx_n_s_FLIP_TOP_BOTTOM; - PyObject *__pyx_n_s_HELP_URL; - PyObject *__pyx_n_s_IMG_FORMATS; - PyObject *__pyx_n_s_INTER_AREA; - PyObject *__pyx_n_s_INTER_LINEAR; - PyObject *__pyx_n_s_Image; - PyObject *__pyx_n_s_ImageOps; - PyObject *__pyx_kp_u_Image_Not_Found; - PyObject *__pyx_n_s_LoadImages; - PyObject *__pyx_n_s_LoadImages___init; - PyObject *__pyx_n_s_LoadImages___iter; - PyObject *__pyx_n_s_LoadImages___len; - PyObject *__pyx_n_s_LoadImages___next; - PyObject *__pyx_n_s_LoadImages_new_video; - PyObject *__pyx_kp_u_No_images_or_videos_found_in; - PyObject *__pyx_n_u_Orientation; - PyObject *__pyx_n_s_PIL; - PyObject *__pyx_n_s_Path; - PyObject *__pyx_n_s_Pool; - PyObject *__pyx_n_s_ROTATE_180; - PyObject *__pyx_n_s_ROTATE_270; - PyObject *__pyx_n_s_ROTATE_90; - PyObject *__pyx_n_s_StopIteration; - PyObject *__pyx_kp_u_Supported_formats_are_images; - PyObject *__pyx_n_s_TAGS; - PyObject *__pyx_n_s_TRANSPOSE; - PyObject *__pyx_n_s_TRANSVERSE; - PyObject *__pyx_n_s_Thread; - PyObject *__pyx_n_s_ThreadPool; - PyObject *__pyx_n_s_VID_FORMATS; - PyObject *__pyx_n_s_VideoCapture; - PyObject *__pyx_n_s_ZipFile; - PyObject *__pyx_kp_u__10; - PyObject *__pyx_kp_u__18; - PyObject *__pyx_n_s__21; - PyObject *__pyx_n_u__21; - PyObject *__pyx_kp_u__25; - PyObject *__pyx_kp_u__26; - PyObject *__pyx_n_s__3; - PyObject *__pyx_kp_u__3; - PyObject *__pyx_kp_u__4; - PyObject *__pyx_kp_u__5; - PyObject *__pyx_kp_u__6; - PyObject *__pyx_n_s__64; - PyObject *__pyx_kp_u__7; - PyObject *__pyx_kp_u__8; - PyObject *__pyx_kp_u__9; - PyObject *__pyx_n_u_a; - PyObject *__pyx_n_s_annotated_only; - PyObject *__pyx_n_s_any; - PyObject *__pyx_n_s_append; - PyObject *__pyx_n_s_args; - PyObject *__pyx_n_s_array; - PyObject *__pyx_n_s_as_posix; - PyObject *__pyx_n_s_ascontiguousarray; - PyObject *__pyx_n_u_asf; - PyObject *__pyx_n_s_astype; - PyObject *__pyx_n_s_asyncio_coroutines; - PyObject 
*__pyx_n_s_augment; - PyObject *__pyx_n_s_auto; - PyObject *__pyx_n_s_autosplit; - PyObject *__pyx_n_s_autosplit_locals_genexpr; - PyObject *__pyx_kp_u_autosplit_test_txt; - PyObject *__pyx_kp_u_autosplit_train_txt; - PyObject *__pyx_kp_u_autosplit_val_txt; - PyObject *__pyx_n_u_avi; - PyObject *__pyx_n_s_b; - PyObject *__pyx_n_u_bmp; - PyObject *__pyx_kp_u_box_failure_in; - PyObject *__pyx_n_s_c; - PyObject *__pyx_n_s_cap; - PyObject *__pyx_n_s_choices; - PyObject *__pyx_n_s_class_getitem; - PyObject *__pyx_n_u_classifier; - PyObject *__pyx_n_s_cline_in_traceback; - PyObject *__pyx_n_s_clip; - PyObject *__pyx_n_s_close; - PyObject *__pyx_n_s_concatenate; - PyObject *__pyx_n_s_copy; - PyObject *__pyx_n_s_copyfile; - PyObject *__pyx_n_s_count; - PyObject *__pyx_n_s_create_folder; - PyObject *__pyx_n_s_cv2; - PyObject *__pyx_kp_u_datasets_coco128; - PyObject *__pyx_kp_u_datasets_coco128_images; - PyObject *__pyx_n_s_dict; - PyObject *__pyx_kp_u_disable; - PyObject *__pyx_n_u_dng; - PyObject *__pyx_n_s_doc; - PyObject *__pyx_kp_u_does_not_exist; - PyObject *__pyx_n_s_dtype; - PyObject *__pyx_kp_u_enable; - PyObject *__pyx_n_s_encode; - PyObject *__pyx_n_s_enter; - PyObject *__pyx_n_s_enumerate; - PyObject *__pyx_n_s_exif; - PyObject *__pyx_n_u_exif; - PyObject *__pyx_n_s_exif_size; - PyObject *__pyx_n_s_exif_transpose; - PyObject *__pyx_n_s_exists; - PyObject *__pyx_n_s_exit; - PyObject *__pyx_n_s_extract_boxes; - PyObject *__pyx_n_s_f; - PyObject *__pyx_n_s_file; - PyObject *__pyx_n_s_files; - PyObject *__pyx_n_u_flat; - PyObject *__pyx_n_s_flatten_recursive; - PyObject *__pyx_n_s_float32; - PyObject *__pyx_n_s_frame; - PyObject *__pyx_n_s_frames; - PyObject *__pyx_n_s_full; - PyObject *__pyx_n_s_functional; - PyObject *__pyx_kp_u_gc; - PyObject *__pyx_n_s_genexpr; - PyObject *__pyx_n_s_get; - PyObject *__pyx_n_s_get_hash; - PyObject *__pyx_n_s_get_hash_locals_genexpr; - PyObject *__pyx_n_s_getexif; - PyObject *__pyx_n_s_getexif_2; - PyObject *__pyx_n_s_getsize; - 
PyObject *__pyx_n_u_gif; - PyObject *__pyx_n_s_glob; - PyObject *__pyx_n_s_h; - PyObject *__pyx_n_s_h0; - PyObject *__pyx_n_s_hashlib; - PyObject *__pyx_n_s_hexdigest; - PyObject *__pyx_n_s_hp; - PyObject *__pyx_kp_u_https_github_com_ultralytics_yol; - PyObject *__pyx_n_s_i; - PyObject *__pyx_n_s_im; - PyObject *__pyx_n_s_im_file; - PyObject *__pyx_n_s_image; - PyObject *__pyx_n_u_image; - PyObject *__pyx_kp_u_image_2; - PyObject *__pyx_n_s_images; - PyObject *__pyx_n_u_images; - PyObject *__pyx_n_s_img; - PyObject *__pyx_n_s_img0; - PyObject *__pyx_n_s_img2label_paths; - PyObject *__pyx_n_s_img4; - PyObject *__pyx_n_s_img9; - PyObject *__pyx_n_s_img_files; - PyObject *__pyx_n_s_img_hw; - PyObject *__pyx_n_s_img_hw0; - PyObject *__pyx_n_s_img_npy; - PyObject *__pyx_n_s_img_paths; - PyObject *__pyx_n_s_img_size; - PyObject *__pyx_n_s_imgs; - PyObject *__pyx_n_s_import; - PyObject *__pyx_n_s_imread; - PyObject *__pyx_n_s_imwrite; - PyObject *__pyx_n_s_index; - PyObject *__pyx_n_s_indices; - PyObject *__pyx_n_s_info; - PyObject *__pyx_n_s_init; - PyObject *__pyx_n_s_init_subclass; - PyObject *__pyx_n_s_initializing; - PyObject *__pyx_n_s_int; - PyObject *__pyx_n_s_interpolation; - PyObject *__pyx_n_s_is_coroutine; - PyObject *__pyx_n_s_is_dir; - PyObject *__pyx_n_s_isdir; - PyObject *__pyx_kp_u_isenabled; - PyObject *__pyx_n_s_isfile; - PyObject *__pyx_n_s_items; - PyObject *__pyx_n_s_iter; - PyObject *__pyx_n_s_itertools; - PyObject *__pyx_n_s_j; - PyObject *__pyx_n_s_join; - PyObject *__pyx_n_u_jpeg; - PyObject *__pyx_kp_u_jpg; - PyObject *__pyx_n_u_jpg_2; - PyObject *__pyx_n_s_json; - PyObject *__pyx_n_s_k; - PyObject *__pyx_n_s_keys; - PyObject *__pyx_n_s_labels; - PyObject *__pyx_n_u_labels; - PyObject *__pyx_n_s_labels4; - PyObject *__pyx_n_s_labels9; - PyObject *__pyx_n_s_lb; - PyObject *__pyx_n_s_lb_file; - PyObject *__pyx_n_s_len; - PyObject *__pyx_n_s_letterbox; - PyObject *__pyx_n_s_load; - PyObject *__pyx_n_s_load_image; - PyObject *__pyx_n_s_load_mosaic; 
- PyObject *__pyx_n_s_load_mosaic9; - PyObject *__pyx_n_s_load_mosaic9_locals_genexpr; - PyObject *__pyx_n_s_load_mosaic_locals_genexpr; - PyObject *__pyx_n_s_lower; - PyObject *__pyx_n_u_m4v; - PyObject *__pyx_n_s_main; - PyObject *__pyx_n_s_makedirs; - PyObject *__pyx_n_s_math; - PyObject *__pyx_n_s_md5; - PyObject *__pyx_n_s_metaclass; - PyObject *__pyx_n_s_method; - PyObject *__pyx_n_s_missing_ok; - PyObject *__pyx_n_s_mkdir; - PyObject *__pyx_n_u_mkv; - PyObject *__pyx_n_s_mode; - PyObject *__pyx_n_s_module; - PyObject *__pyx_n_s_mosaic_border; - PyObject *__pyx_n_u_mov; - PyObject *__pyx_n_u_mp4; - PyObject *__pyx_n_u_mpeg; - PyObject *__pyx_n_u_mpg; - PyObject *__pyx_n_u_mpo; - PyObject *__pyx_n_s_multiprocessing_pool; - PyObject *__pyx_n_s_n; - PyObject *__pyx_n_s_name; - PyObject *__pyx_n_s_name_2; - PyObject *__pyx_kp_u_new; - PyObject *__pyx_n_s_new_path; - PyObject *__pyx_n_s_new_video; - PyObject *__pyx_n_s_next; - PyObject *__pyx_n_s_nf; - PyObject *__pyx_n_s_ni; - PyObject *__pyx_n_s_nn; - PyObject *__pyx_n_s_np; - PyObject *__pyx_n_s_npy; - PyObject *__pyx_n_s_numpy; - PyObject *__pyx_n_s_nv; - PyObject *__pyx_n_s_open; - PyObject *__pyx_n_s_orientation; - PyObject *__pyx_n_s_os; - PyObject *__pyx_n_s_out; - PyObject *__pyx_n_s_p; - PyObject *__pyx_n_s_padh; - PyObject *__pyx_n_s_padw; - PyObject *__pyx_n_s_padx; - PyObject *__pyx_n_s_pady; - PyObject *__pyx_n_s_parent; - PyObject *__pyx_n_s_parents; - PyObject *__pyx_n_s_path; - PyObject *__pyx_n_s_pathlib; - PyObject *__pyx_n_s_paths; - PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils; - PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2; - PyObject *__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3; - PyObject *__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4; - PyObject *__pyx_n_u_png; - PyObject *__pyx_n_s_prepare; - PyObject *__pyx_n_s_print; - PyObject *__pyx_n_s_qualname; - PyObject *__pyx_n_s_r; - PyObject *__pyx_n_s_random; - PyObject *__pyx_n_s_ravel; - PyObject *__pyx_n_s_read; - 
PyObject *__pyx_n_s_recursive; - PyObject *__pyx_n_s_relative_to; - PyObject *__pyx_n_s_release; - PyObject *__pyx_n_s_repeat; - PyObject *__pyx_n_s_reshape; - PyObject *__pyx_n_s_resize; - PyObject *__pyx_n_s_resolve; - PyObject *__pyx_n_s_ret_val; - PyObject *__pyx_n_s_rglob; - PyObject *__pyx_n_s_rmtree; - PyObject *__pyx_n_s_rotation; - PyObject *__pyx_n_s_rsplit; - PyObject *__pyx_n_s_s; - PyObject *__pyx_n_s_sa; - PyObject *__pyx_n_s_sb; - PyObject *__pyx_n_s_seed; - PyObject *__pyx_n_s_segments; - PyObject *__pyx_n_s_segments4; - PyObject *__pyx_n_s_segments9; - PyObject *__pyx_n_s_self; - PyObject *__pyx_n_s_send; - PyObject *__pyx_n_s_sep; - PyObject *__pyx_n_s_set_name; - PyObject *__pyx_n_s_shape; - PyObject *__pyx_n_s_shuffle; - PyObject *__pyx_n_s_shutil; - PyObject *__pyx_n_s_size; - PyObject *__pyx_n_s_spec; - PyObject *__pyx_n_s_split; - PyObject *__pyx_n_s_splitlines; - PyObject *__pyx_n_s_stem; - PyObject *__pyx_n_s_stride; - PyObject *__pyx_n_s_strip; - PyObject *__pyx_n_s_suffix; - PyObject *__pyx_n_s_sum; - PyObject *__pyx_n_s_super; - PyObject *__pyx_n_s_test; - PyObject *__pyx_n_s_threading; - PyObject *__pyx_n_s_throw; - PyObject *__pyx_n_u_tif; - PyObject *__pyx_n_u_tiff; - PyObject *__pyx_n_s_time; - PyObject *__pyx_n_s_tobytes; - PyObject *__pyx_n_s_torch; - PyObject *__pyx_n_s_torch_nn_functional; - PyObject *__pyx_n_s_total; - PyObject *__pyx_n_s_tqdm; - PyObject *__pyx_n_s_transpose; - PyObject *__pyx_kp_u_txt; - PyObject *__pyx_n_s_txt_2; - PyObject *__pyx_n_s_uint8; - PyObject *__pyx_n_s_uniform; - PyObject *__pyx_n_s_unlink; - PyObject *__pyx_n_s_update; - PyObject *__pyx_kp_u_using_txt_labeled_images_only; - PyObject *__pyx_n_u_video; - PyObject *__pyx_kp_u_video_2; - PyObject *__pyx_n_s_video_flag; - PyObject *__pyx_kp_u_videos; - PyObject *__pyx_n_s_videos_2; - PyObject *__pyx_n_s_w; - PyObject *__pyx_n_s_w0; - PyObject *__pyx_n_u_webp; - PyObject *__pyx_n_s_weights; - PyObject *__pyx_n_u_wmv; - PyObject *__pyx_n_s_wp; - PyObject 
*__pyx_n_s_write; - PyObject *__pyx_n_s_x; - PyObject *__pyx_n_s_x1; - PyObject *__pyx_n_s_x1a; - PyObject *__pyx_n_s_x1b; - PyObject *__pyx_n_s_x2; - PyObject *__pyx_n_s_x2a; - PyObject *__pyx_n_s_x2b; - PyObject *__pyx_n_s_xc; - PyObject *__pyx_n_s_xyn2xy; - PyObject *__pyx_n_s_xywh2xyxy; - PyObject *__pyx_n_s_xywhn2xyxy; - PyObject *__pyx_n_s_y1; - PyObject *__pyx_n_s_y1a; - PyObject *__pyx_n_s_y1b; - PyObject *__pyx_n_s_y2; - PyObject *__pyx_n_s_y2a; - PyObject *__pyx_n_s_y2b; - PyObject *__pyx_n_s_yaml; - PyObject *__pyx_n_s_yc; - PyObject *__pyx_n_s_zip; - PyObject *__pyx_n_s_zipfile; - PyObject *__pyx_float_0_0; - PyObject *__pyx_float_0_1; - PyObject *__pyx_float_0_9; - PyObject *__pyx_float_1_2; - PyObject *__pyx_int_0; - PyObject *__pyx_int_1; - PyObject *__pyx_int_2; - PyObject *__pyx_int_3; - PyObject *__pyx_int_4; - PyObject *__pyx_int_5; - PyObject *__pyx_int_6; - PyObject *__pyx_int_7; - PyObject *__pyx_int_8; - PyObject *__pyx_int_32; - PyObject *__pyx_int_114; - PyObject *__pyx_int_274; - PyObject *__pyx_int_640; - PyObject *__pyx_int_neg_1; - PyObject *__pyx_tuple__2; - PyObject *__pyx_slice__12; - PyObject *__pyx_slice__14; - PyObject *__pyx_slice__15; - PyObject *__pyx_slice__16; - PyObject *__pyx_slice__22; - PyObject *__pyx_tuple__11; - PyObject *__pyx_tuple__13; - PyObject *__pyx_tuple__17; - PyObject *__pyx_tuple__19; - PyObject *__pyx_tuple__20; - PyObject *__pyx_tuple__23; - PyObject *__pyx_tuple__24; - PyObject *__pyx_tuple__27; - PyObject *__pyx_tuple__28; - PyObject *__pyx_tuple__30; - PyObject *__pyx_tuple__32; - PyObject *__pyx_tuple__34; - PyObject *__pyx_tuple__36; - PyObject *__pyx_tuple__37; - PyObject *__pyx_tuple__39; - PyObject *__pyx_tuple__41; - PyObject *__pyx_tuple__44; - PyObject *__pyx_tuple__46; - PyObject *__pyx_tuple__48; - PyObject *__pyx_tuple__50; - PyObject *__pyx_tuple__52; - PyObject *__pyx_tuple__54; - PyObject *__pyx_tuple__55; - PyObject *__pyx_tuple__57; - PyObject *__pyx_tuple__58; - PyObject 
*__pyx_tuple__60; - PyObject *__pyx_tuple__61; - PyObject *__pyx_tuple__63; - PyObject *__pyx_codeobj__29; - PyObject *__pyx_codeobj__31; - PyObject *__pyx_codeobj__33; - PyObject *__pyx_codeobj__35; - PyObject *__pyx_codeobj__38; - PyObject *__pyx_codeobj__40; - PyObject *__pyx_codeobj__42; - PyObject *__pyx_codeobj__43; - PyObject *__pyx_codeobj__45; - PyObject *__pyx_codeobj__47; - PyObject *__pyx_codeobj__49; - PyObject *__pyx_codeobj__51; - PyObject *__pyx_codeobj__53; - PyObject *__pyx_codeobj__56; - PyObject *__pyx_codeobj__59; - PyObject *__pyx_codeobj__62; -} __pyx_mstate; - -#ifdef __cplusplus -namespace { - extern struct PyModuleDef __pyx_moduledef; -} /* anonymous namespace */ -#else -static struct PyModuleDef __pyx_moduledef; -#endif - -#define __pyx_mstate(o) ((__pyx_mstate *)__Pyx_PyModule_GetState(o)) - -#define __pyx_mstate_global (__pyx_mstate(PyState_FindModule(&__pyx_moduledef))) - -#define __pyx_m (PyState_FindModule(&__pyx_moduledef)) -#endif -/* #### Code section: module_state_clear ### */ -#if CYTHON_USE_MODULE_STATE -static int __pyx_m_clear(PyObject *m) { - __pyx_mstate *clear_module_state = __pyx_mstate(m); - if (!clear_module_state) return 0; - Py_CLEAR(clear_module_state->__pyx_d); - Py_CLEAR(clear_module_state->__pyx_b); - Py_CLEAR(clear_module_state->__pyx_cython_runtime); - Py_CLEAR(clear_module_state->__pyx_empty_tuple); - Py_CLEAR(clear_module_state->__pyx_empty_bytes); - Py_CLEAR(clear_module_state->__pyx_empty_unicode); - #ifdef __Pyx_CyFunction_USED - Py_CLEAR(clear_module_state->__pyx_CyFunctionType); - #endif - #ifdef __Pyx_FusedFunction_USED - Py_CLEAR(clear_module_state->__pyx_FusedFunctionType); - #endif - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash); - 
Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit); - Py_CLEAR(clear_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr); - 
Py_CLEAR(clear_module_state->__pyx_kp_u_); - Py_CLEAR(clear_module_state->__pyx_n_s_AssertionError); - Py_CLEAR(clear_module_state->__pyx_kp_u_Autosplitting_images_from); - Py_CLEAR(clear_module_state->__pyx_n_s_CAP_PROP_FRAME_COUNT); - Py_CLEAR(clear_module_state->__pyx_kp_u_ERROR); - Py_CLEAR(clear_module_state->__pyx_n_s_ExifTags); - Py_CLEAR(clear_module_state->__pyx_n_s_F); - Py_CLEAR(clear_module_state->__pyx_n_s_FLIP_LEFT_RIGHT); - Py_CLEAR(clear_module_state->__pyx_n_s_FLIP_TOP_BOTTOM); - Py_CLEAR(clear_module_state->__pyx_n_s_HELP_URL); - Py_CLEAR(clear_module_state->__pyx_n_s_IMG_FORMATS); - Py_CLEAR(clear_module_state->__pyx_n_s_INTER_AREA); - Py_CLEAR(clear_module_state->__pyx_n_s_INTER_LINEAR); - Py_CLEAR(clear_module_state->__pyx_n_s_Image); - Py_CLEAR(clear_module_state->__pyx_n_s_ImageOps); - Py_CLEAR(clear_module_state->__pyx_kp_u_Image_Not_Found); - Py_CLEAR(clear_module_state->__pyx_n_s_LoadImages); - Py_CLEAR(clear_module_state->__pyx_n_s_LoadImages___init); - Py_CLEAR(clear_module_state->__pyx_n_s_LoadImages___iter); - Py_CLEAR(clear_module_state->__pyx_n_s_LoadImages___len); - Py_CLEAR(clear_module_state->__pyx_n_s_LoadImages___next); - Py_CLEAR(clear_module_state->__pyx_n_s_LoadImages_new_video); - Py_CLEAR(clear_module_state->__pyx_kp_u_No_images_or_videos_found_in); - Py_CLEAR(clear_module_state->__pyx_n_u_Orientation); - Py_CLEAR(clear_module_state->__pyx_n_s_PIL); - Py_CLEAR(clear_module_state->__pyx_n_s_Path); - Py_CLEAR(clear_module_state->__pyx_n_s_Pool); - Py_CLEAR(clear_module_state->__pyx_n_s_ROTATE_180); - Py_CLEAR(clear_module_state->__pyx_n_s_ROTATE_270); - Py_CLEAR(clear_module_state->__pyx_n_s_ROTATE_90); - Py_CLEAR(clear_module_state->__pyx_n_s_StopIteration); - Py_CLEAR(clear_module_state->__pyx_kp_u_Supported_formats_are_images); - Py_CLEAR(clear_module_state->__pyx_n_s_TAGS); - Py_CLEAR(clear_module_state->__pyx_n_s_TRANSPOSE); - Py_CLEAR(clear_module_state->__pyx_n_s_TRANSVERSE); - 
- Py_CLEAR(clear_module_state->__pyx_n_s_Thread);
- Py_CLEAR(clear_module_state->__pyx_n_s_ThreadPool);
- Py_CLEAR(clear_module_state->__pyx_n_s_VID_FORMATS);
- Py_CLEAR(clear_module_state->__pyx_n_s_VideoCapture);
- Py_CLEAR(clear_module_state->__pyx_n_s_ZipFile);
- Py_CLEAR(clear_module_state->__pyx_kp_u__10);
- Py_CLEAR(clear_module_state->__pyx_kp_u__18);
- Py_CLEAR(clear_module_state->__pyx_n_s__21);
- Py_CLEAR(clear_module_state->__pyx_n_u__21);
- Py_CLEAR(clear_module_state->__pyx_kp_u__25);
- Py_CLEAR(clear_module_state->__pyx_kp_u__26);
- Py_CLEAR(clear_module_state->__pyx_n_s__3);
- Py_CLEAR(clear_module_state->__pyx_kp_u__3);
- Py_CLEAR(clear_module_state->__pyx_kp_u__4);
- Py_CLEAR(clear_module_state->__pyx_kp_u__5);
- Py_CLEAR(clear_module_state->__pyx_kp_u__6);
- Py_CLEAR(clear_module_state->__pyx_n_s__64);
- Py_CLEAR(clear_module_state->__pyx_kp_u__7);
- Py_CLEAR(clear_module_state->__pyx_kp_u__8);
- Py_CLEAR(clear_module_state->__pyx_kp_u__9);
- Py_CLEAR(clear_module_state->__pyx_n_u_a);
- Py_CLEAR(clear_module_state->__pyx_n_s_annotated_only);
- Py_CLEAR(clear_module_state->__pyx_n_s_any);
- Py_CLEAR(clear_module_state->__pyx_n_s_append);
- Py_CLEAR(clear_module_state->__pyx_n_s_args);
- Py_CLEAR(clear_module_state->__pyx_n_s_array);
- Py_CLEAR(clear_module_state->__pyx_n_s_as_posix);
- Py_CLEAR(clear_module_state->__pyx_n_s_ascontiguousarray);
- Py_CLEAR(clear_module_state->__pyx_n_u_asf);
- Py_CLEAR(clear_module_state->__pyx_n_s_astype);
- Py_CLEAR(clear_module_state->__pyx_n_s_asyncio_coroutines);
- Py_CLEAR(clear_module_state->__pyx_n_s_augment);
- Py_CLEAR(clear_module_state->__pyx_n_s_auto);
- Py_CLEAR(clear_module_state->__pyx_n_s_autosplit);
- Py_CLEAR(clear_module_state->__pyx_n_s_autosplit_locals_genexpr);
- Py_CLEAR(clear_module_state->__pyx_kp_u_autosplit_test_txt);
- Py_CLEAR(clear_module_state->__pyx_kp_u_autosplit_train_txt);
- Py_CLEAR(clear_module_state->__pyx_kp_u_autosplit_val_txt);
- Py_CLEAR(clear_module_state->__pyx_n_u_avi);
- Py_CLEAR(clear_module_state->__pyx_n_s_b);
- Py_CLEAR(clear_module_state->__pyx_n_u_bmp);
- Py_CLEAR(clear_module_state->__pyx_kp_u_box_failure_in);
- Py_CLEAR(clear_module_state->__pyx_n_s_c);
- Py_CLEAR(clear_module_state->__pyx_n_s_cap);
- Py_CLEAR(clear_module_state->__pyx_n_s_choices);
- Py_CLEAR(clear_module_state->__pyx_n_s_class_getitem);
- Py_CLEAR(clear_module_state->__pyx_n_u_classifier);
- Py_CLEAR(clear_module_state->__pyx_n_s_cline_in_traceback);
- Py_CLEAR(clear_module_state->__pyx_n_s_clip);
- Py_CLEAR(clear_module_state->__pyx_n_s_close);
- Py_CLEAR(clear_module_state->__pyx_n_s_concatenate);
- Py_CLEAR(clear_module_state->__pyx_n_s_copy);
- Py_CLEAR(clear_module_state->__pyx_n_s_copyfile);
- Py_CLEAR(clear_module_state->__pyx_n_s_count);
- Py_CLEAR(clear_module_state->__pyx_n_s_create_folder);
- Py_CLEAR(clear_module_state->__pyx_n_s_cv2);
- Py_CLEAR(clear_module_state->__pyx_kp_u_datasets_coco128);
- Py_CLEAR(clear_module_state->__pyx_kp_u_datasets_coco128_images);
- Py_CLEAR(clear_module_state->__pyx_n_s_dict);
- Py_CLEAR(clear_module_state->__pyx_kp_u_disable);
- Py_CLEAR(clear_module_state->__pyx_n_u_dng);
- Py_CLEAR(clear_module_state->__pyx_n_s_doc);
- Py_CLEAR(clear_module_state->__pyx_kp_u_does_not_exist);
- Py_CLEAR(clear_module_state->__pyx_n_s_dtype);
- Py_CLEAR(clear_module_state->__pyx_kp_u_enable);
- Py_CLEAR(clear_module_state->__pyx_n_s_encode);
- Py_CLEAR(clear_module_state->__pyx_n_s_enter);
- Py_CLEAR(clear_module_state->__pyx_n_s_enumerate);
- Py_CLEAR(clear_module_state->__pyx_n_s_exif);
- Py_CLEAR(clear_module_state->__pyx_n_u_exif);
- Py_CLEAR(clear_module_state->__pyx_n_s_exif_size);
- Py_CLEAR(clear_module_state->__pyx_n_s_exif_transpose);
- Py_CLEAR(clear_module_state->__pyx_n_s_exists);
- Py_CLEAR(clear_module_state->__pyx_n_s_exit);
- Py_CLEAR(clear_module_state->__pyx_n_s_extract_boxes);
- Py_CLEAR(clear_module_state->__pyx_n_s_f);
- Py_CLEAR(clear_module_state->__pyx_n_s_file);
- Py_CLEAR(clear_module_state->__pyx_n_s_files);
- Py_CLEAR(clear_module_state->__pyx_n_u_flat);
- Py_CLEAR(clear_module_state->__pyx_n_s_flatten_recursive);
- Py_CLEAR(clear_module_state->__pyx_n_s_float32);
- Py_CLEAR(clear_module_state->__pyx_n_s_frame);
- Py_CLEAR(clear_module_state->__pyx_n_s_frames);
- Py_CLEAR(clear_module_state->__pyx_n_s_full);
- Py_CLEAR(clear_module_state->__pyx_n_s_functional);
- Py_CLEAR(clear_module_state->__pyx_kp_u_gc);
- Py_CLEAR(clear_module_state->__pyx_n_s_genexpr);
- Py_CLEAR(clear_module_state->__pyx_n_s_get);
- Py_CLEAR(clear_module_state->__pyx_n_s_get_hash);
- Py_CLEAR(clear_module_state->__pyx_n_s_get_hash_locals_genexpr);
- Py_CLEAR(clear_module_state->__pyx_n_s_getexif);
- Py_CLEAR(clear_module_state->__pyx_n_s_getexif_2);
- Py_CLEAR(clear_module_state->__pyx_n_s_getsize);
- Py_CLEAR(clear_module_state->__pyx_n_u_gif);
- Py_CLEAR(clear_module_state->__pyx_n_s_glob);
- Py_CLEAR(clear_module_state->__pyx_n_s_h);
- Py_CLEAR(clear_module_state->__pyx_n_s_h0);
- Py_CLEAR(clear_module_state->__pyx_n_s_hashlib);
- Py_CLEAR(clear_module_state->__pyx_n_s_hexdigest);
- Py_CLEAR(clear_module_state->__pyx_n_s_hp);
- Py_CLEAR(clear_module_state->__pyx_kp_u_https_github_com_ultralytics_yol);
- Py_CLEAR(clear_module_state->__pyx_n_s_i);
- Py_CLEAR(clear_module_state->__pyx_n_s_im);
- Py_CLEAR(clear_module_state->__pyx_n_s_im_file);
- Py_CLEAR(clear_module_state->__pyx_n_s_image);
- Py_CLEAR(clear_module_state->__pyx_n_u_image);
- Py_CLEAR(clear_module_state->__pyx_kp_u_image_2);
- Py_CLEAR(clear_module_state->__pyx_n_s_images);
- Py_CLEAR(clear_module_state->__pyx_n_u_images);
- Py_CLEAR(clear_module_state->__pyx_n_s_img);
- Py_CLEAR(clear_module_state->__pyx_n_s_img0);
- Py_CLEAR(clear_module_state->__pyx_n_s_img2label_paths);
- Py_CLEAR(clear_module_state->__pyx_n_s_img4);
- Py_CLEAR(clear_module_state->__pyx_n_s_img9);
- Py_CLEAR(clear_module_state->__pyx_n_s_img_files);
- Py_CLEAR(clear_module_state->__pyx_n_s_img_hw);
- Py_CLEAR(clear_module_state->__pyx_n_s_img_hw0);
- Py_CLEAR(clear_module_state->__pyx_n_s_img_npy);
- Py_CLEAR(clear_module_state->__pyx_n_s_img_paths);
- Py_CLEAR(clear_module_state->__pyx_n_s_img_size);
- Py_CLEAR(clear_module_state->__pyx_n_s_imgs);
- Py_CLEAR(clear_module_state->__pyx_n_s_import);
- Py_CLEAR(clear_module_state->__pyx_n_s_imread);
- Py_CLEAR(clear_module_state->__pyx_n_s_imwrite);
- Py_CLEAR(clear_module_state->__pyx_n_s_index);
- Py_CLEAR(clear_module_state->__pyx_n_s_indices);
- Py_CLEAR(clear_module_state->__pyx_n_s_info);
- Py_CLEAR(clear_module_state->__pyx_n_s_init);
- Py_CLEAR(clear_module_state->__pyx_n_s_init_subclass);
- Py_CLEAR(clear_module_state->__pyx_n_s_initializing);
- Py_CLEAR(clear_module_state->__pyx_n_s_int);
- Py_CLEAR(clear_module_state->__pyx_n_s_interpolation);
- Py_CLEAR(clear_module_state->__pyx_n_s_is_coroutine);
- Py_CLEAR(clear_module_state->__pyx_n_s_is_dir);
- Py_CLEAR(clear_module_state->__pyx_n_s_isdir);
- Py_CLEAR(clear_module_state->__pyx_kp_u_isenabled);
- Py_CLEAR(clear_module_state->__pyx_n_s_isfile);
- Py_CLEAR(clear_module_state->__pyx_n_s_items);
- Py_CLEAR(clear_module_state->__pyx_n_s_iter);
- Py_CLEAR(clear_module_state->__pyx_n_s_itertools);
- Py_CLEAR(clear_module_state->__pyx_n_s_j);
- Py_CLEAR(clear_module_state->__pyx_n_s_join);
- Py_CLEAR(clear_module_state->__pyx_n_u_jpeg);
- Py_CLEAR(clear_module_state->__pyx_kp_u_jpg);
- Py_CLEAR(clear_module_state->__pyx_n_u_jpg_2);
- Py_CLEAR(clear_module_state->__pyx_n_s_json);
- Py_CLEAR(clear_module_state->__pyx_n_s_k);
- Py_CLEAR(clear_module_state->__pyx_n_s_keys);
- Py_CLEAR(clear_module_state->__pyx_n_s_labels);
- Py_CLEAR(clear_module_state->__pyx_n_u_labels);
- Py_CLEAR(clear_module_state->__pyx_n_s_labels4);
- Py_CLEAR(clear_module_state->__pyx_n_s_labels9);
- Py_CLEAR(clear_module_state->__pyx_n_s_lb);
- Py_CLEAR(clear_module_state->__pyx_n_s_lb_file);
- Py_CLEAR(clear_module_state->__pyx_n_s_len);
- Py_CLEAR(clear_module_state->__pyx_n_s_letterbox);
- Py_CLEAR(clear_module_state->__pyx_n_s_load);
- Py_CLEAR(clear_module_state->__pyx_n_s_load_image);
- Py_CLEAR(clear_module_state->__pyx_n_s_load_mosaic);
- Py_CLEAR(clear_module_state->__pyx_n_s_load_mosaic9);
- Py_CLEAR(clear_module_state->__pyx_n_s_load_mosaic9_locals_genexpr);
- Py_CLEAR(clear_module_state->__pyx_n_s_load_mosaic_locals_genexpr);
- Py_CLEAR(clear_module_state->__pyx_n_s_lower);
- Py_CLEAR(clear_module_state->__pyx_n_u_m4v);
- Py_CLEAR(clear_module_state->__pyx_n_s_main);
- Py_CLEAR(clear_module_state->__pyx_n_s_makedirs);
- Py_CLEAR(clear_module_state->__pyx_n_s_math);
- Py_CLEAR(clear_module_state->__pyx_n_s_md5);
- Py_CLEAR(clear_module_state->__pyx_n_s_metaclass);
- Py_CLEAR(clear_module_state->__pyx_n_s_method);
- Py_CLEAR(clear_module_state->__pyx_n_s_missing_ok);
- Py_CLEAR(clear_module_state->__pyx_n_s_mkdir);
- Py_CLEAR(clear_module_state->__pyx_n_u_mkv);
- Py_CLEAR(clear_module_state->__pyx_n_s_mode);
- Py_CLEAR(clear_module_state->__pyx_n_s_module);
- Py_CLEAR(clear_module_state->__pyx_n_s_mosaic_border);
- Py_CLEAR(clear_module_state->__pyx_n_u_mov);
- Py_CLEAR(clear_module_state->__pyx_n_u_mp4);
- Py_CLEAR(clear_module_state->__pyx_n_u_mpeg);
- Py_CLEAR(clear_module_state->__pyx_n_u_mpg);
- Py_CLEAR(clear_module_state->__pyx_n_u_mpo);
- Py_CLEAR(clear_module_state->__pyx_n_s_multiprocessing_pool);
- Py_CLEAR(clear_module_state->__pyx_n_s_n);
- Py_CLEAR(clear_module_state->__pyx_n_s_name);
- Py_CLEAR(clear_module_state->__pyx_n_s_name_2);
- Py_CLEAR(clear_module_state->__pyx_kp_u_new);
- Py_CLEAR(clear_module_state->__pyx_n_s_new_path);
- Py_CLEAR(clear_module_state->__pyx_n_s_new_video);
- Py_CLEAR(clear_module_state->__pyx_n_s_next);
- Py_CLEAR(clear_module_state->__pyx_n_s_nf);
- Py_CLEAR(clear_module_state->__pyx_n_s_ni);
- Py_CLEAR(clear_module_state->__pyx_n_s_nn);
- Py_CLEAR(clear_module_state->__pyx_n_s_np);
- Py_CLEAR(clear_module_state->__pyx_n_s_npy);
- Py_CLEAR(clear_module_state->__pyx_n_s_numpy);
- Py_CLEAR(clear_module_state->__pyx_n_s_nv);
- Py_CLEAR(clear_module_state->__pyx_n_s_open);
- Py_CLEAR(clear_module_state->__pyx_n_s_orientation);
- Py_CLEAR(clear_module_state->__pyx_n_s_os);
- Py_CLEAR(clear_module_state->__pyx_n_s_out);
- Py_CLEAR(clear_module_state->__pyx_n_s_p);
- Py_CLEAR(clear_module_state->__pyx_n_s_padh);
- Py_CLEAR(clear_module_state->__pyx_n_s_padw);
- Py_CLEAR(clear_module_state->__pyx_n_s_padx);
- Py_CLEAR(clear_module_state->__pyx_n_s_pady);
- Py_CLEAR(clear_module_state->__pyx_n_s_parent);
- Py_CLEAR(clear_module_state->__pyx_n_s_parents);
- Py_CLEAR(clear_module_state->__pyx_n_s_path);
- Py_CLEAR(clear_module_state->__pyx_n_s_pathlib);
- Py_CLEAR(clear_module_state->__pyx_n_s_paths);
- Py_CLEAR(clear_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils);
- Py_CLEAR(clear_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2);
- Py_CLEAR(clear_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3);
- Py_CLEAR(clear_module_state->__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4);
- Py_CLEAR(clear_module_state->__pyx_n_u_png);
- Py_CLEAR(clear_module_state->__pyx_n_s_prepare);
- Py_CLEAR(clear_module_state->__pyx_n_s_print);
- Py_CLEAR(clear_module_state->__pyx_n_s_qualname);
- Py_CLEAR(clear_module_state->__pyx_n_s_r);
- Py_CLEAR(clear_module_state->__pyx_n_s_random);
- Py_CLEAR(clear_module_state->__pyx_n_s_ravel);
- Py_CLEAR(clear_module_state->__pyx_n_s_read);
- Py_CLEAR(clear_module_state->__pyx_n_s_recursive);
- Py_CLEAR(clear_module_state->__pyx_n_s_relative_to);
- Py_CLEAR(clear_module_state->__pyx_n_s_release);
- Py_CLEAR(clear_module_state->__pyx_n_s_repeat);
- Py_CLEAR(clear_module_state->__pyx_n_s_reshape);
- Py_CLEAR(clear_module_state->__pyx_n_s_resize);
- Py_CLEAR(clear_module_state->__pyx_n_s_resolve);
- Py_CLEAR(clear_module_state->__pyx_n_s_ret_val);
- Py_CLEAR(clear_module_state->__pyx_n_s_rglob);
- Py_CLEAR(clear_module_state->__pyx_n_s_rmtree);
- Py_CLEAR(clear_module_state->__pyx_n_s_rotation);
- Py_CLEAR(clear_module_state->__pyx_n_s_rsplit);
- Py_CLEAR(clear_module_state->__pyx_n_s_s);
- Py_CLEAR(clear_module_state->__pyx_n_s_sa);
- Py_CLEAR(clear_module_state->__pyx_n_s_sb);
- Py_CLEAR(clear_module_state->__pyx_n_s_seed);
- Py_CLEAR(clear_module_state->__pyx_n_s_segments);
- Py_CLEAR(clear_module_state->__pyx_n_s_segments4);
- Py_CLEAR(clear_module_state->__pyx_n_s_segments9);
- Py_CLEAR(clear_module_state->__pyx_n_s_self);
- Py_CLEAR(clear_module_state->__pyx_n_s_send);
- Py_CLEAR(clear_module_state->__pyx_n_s_sep);
- Py_CLEAR(clear_module_state->__pyx_n_s_set_name);
- Py_CLEAR(clear_module_state->__pyx_n_s_shape);
- Py_CLEAR(clear_module_state->__pyx_n_s_shuffle);
- Py_CLEAR(clear_module_state->__pyx_n_s_shutil);
- Py_CLEAR(clear_module_state->__pyx_n_s_size);
- Py_CLEAR(clear_module_state->__pyx_n_s_spec);
- Py_CLEAR(clear_module_state->__pyx_n_s_split);
- Py_CLEAR(clear_module_state->__pyx_n_s_splitlines);
- Py_CLEAR(clear_module_state->__pyx_n_s_stem);
- Py_CLEAR(clear_module_state->__pyx_n_s_stride);
- Py_CLEAR(clear_module_state->__pyx_n_s_strip);
- Py_CLEAR(clear_module_state->__pyx_n_s_suffix);
- Py_CLEAR(clear_module_state->__pyx_n_s_sum);
- Py_CLEAR(clear_module_state->__pyx_n_s_super);
- Py_CLEAR(clear_module_state->__pyx_n_s_test);
- Py_CLEAR(clear_module_state->__pyx_n_s_threading);
- Py_CLEAR(clear_module_state->__pyx_n_s_throw);
- Py_CLEAR(clear_module_state->__pyx_n_u_tif);
- Py_CLEAR(clear_module_state->__pyx_n_u_tiff);
- Py_CLEAR(clear_module_state->__pyx_n_s_time);
- Py_CLEAR(clear_module_state->__pyx_n_s_tobytes);
- Py_CLEAR(clear_module_state->__pyx_n_s_torch);
- Py_CLEAR(clear_module_state->__pyx_n_s_torch_nn_functional);
- Py_CLEAR(clear_module_state->__pyx_n_s_total);
- Py_CLEAR(clear_module_state->__pyx_n_s_tqdm);
- Py_CLEAR(clear_module_state->__pyx_n_s_transpose);
- Py_CLEAR(clear_module_state->__pyx_kp_u_txt);
- Py_CLEAR(clear_module_state->__pyx_n_s_txt_2);
- Py_CLEAR(clear_module_state->__pyx_n_s_uint8);
- Py_CLEAR(clear_module_state->__pyx_n_s_uniform);
- Py_CLEAR(clear_module_state->__pyx_n_s_unlink);
- Py_CLEAR(clear_module_state->__pyx_n_s_update);
- Py_CLEAR(clear_module_state->__pyx_kp_u_using_txt_labeled_images_only);
- Py_CLEAR(clear_module_state->__pyx_n_u_video);
- Py_CLEAR(clear_module_state->__pyx_kp_u_video_2);
- Py_CLEAR(clear_module_state->__pyx_n_s_video_flag);
- Py_CLEAR(clear_module_state->__pyx_kp_u_videos);
- Py_CLEAR(clear_module_state->__pyx_n_s_videos_2);
- Py_CLEAR(clear_module_state->__pyx_n_s_w);
- Py_CLEAR(clear_module_state->__pyx_n_s_w0);
- Py_CLEAR(clear_module_state->__pyx_n_u_webp);
- Py_CLEAR(clear_module_state->__pyx_n_s_weights);
- Py_CLEAR(clear_module_state->__pyx_n_u_wmv);
- Py_CLEAR(clear_module_state->__pyx_n_s_wp);
- Py_CLEAR(clear_module_state->__pyx_n_s_write);
- Py_CLEAR(clear_module_state->__pyx_n_s_x);
- Py_CLEAR(clear_module_state->__pyx_n_s_x1);
- Py_CLEAR(clear_module_state->__pyx_n_s_x1a);
- Py_CLEAR(clear_module_state->__pyx_n_s_x1b);
- Py_CLEAR(clear_module_state->__pyx_n_s_x2);
- Py_CLEAR(clear_module_state->__pyx_n_s_x2a);
- Py_CLEAR(clear_module_state->__pyx_n_s_x2b);
- Py_CLEAR(clear_module_state->__pyx_n_s_xc);
- Py_CLEAR(clear_module_state->__pyx_n_s_xyn2xy);
- Py_CLEAR(clear_module_state->__pyx_n_s_xywh2xyxy);
- Py_CLEAR(clear_module_state->__pyx_n_s_xywhn2xyxy);
- Py_CLEAR(clear_module_state->__pyx_n_s_y1);
- Py_CLEAR(clear_module_state->__pyx_n_s_y1a);
- Py_CLEAR(clear_module_state->__pyx_n_s_y1b);
- Py_CLEAR(clear_module_state->__pyx_n_s_y2);
- Py_CLEAR(clear_module_state->__pyx_n_s_y2a);
- Py_CLEAR(clear_module_state->__pyx_n_s_y2b);
- Py_CLEAR(clear_module_state->__pyx_n_s_yaml);
- Py_CLEAR(clear_module_state->__pyx_n_s_yc);
- Py_CLEAR(clear_module_state->__pyx_n_s_zip);
- Py_CLEAR(clear_module_state->__pyx_n_s_zipfile);
- Py_CLEAR(clear_module_state->__pyx_float_0_0);
- Py_CLEAR(clear_module_state->__pyx_float_0_1);
- Py_CLEAR(clear_module_state->__pyx_float_0_9);
- Py_CLEAR(clear_module_state->__pyx_float_1_2);
- Py_CLEAR(clear_module_state->__pyx_int_0);
- Py_CLEAR(clear_module_state->__pyx_int_1);
- Py_CLEAR(clear_module_state->__pyx_int_2);
- Py_CLEAR(clear_module_state->__pyx_int_3);
- Py_CLEAR(clear_module_state->__pyx_int_4);
- Py_CLEAR(clear_module_state->__pyx_int_5);
- Py_CLEAR(clear_module_state->__pyx_int_6);
- Py_CLEAR(clear_module_state->__pyx_int_7);
- Py_CLEAR(clear_module_state->__pyx_int_8);
- Py_CLEAR(clear_module_state->__pyx_int_32);
- Py_CLEAR(clear_module_state->__pyx_int_114);
- Py_CLEAR(clear_module_state->__pyx_int_274);
- Py_CLEAR(clear_module_state->__pyx_int_640);
- Py_CLEAR(clear_module_state->__pyx_int_neg_1);
- Py_CLEAR(clear_module_state->__pyx_tuple__2);
- Py_CLEAR(clear_module_state->__pyx_slice__12);
- Py_CLEAR(clear_module_state->__pyx_slice__14);
- Py_CLEAR(clear_module_state->__pyx_slice__15);
- Py_CLEAR(clear_module_state->__pyx_slice__16);
- Py_CLEAR(clear_module_state->__pyx_slice__22);
- Py_CLEAR(clear_module_state->__pyx_tuple__11);
- Py_CLEAR(clear_module_state->__pyx_tuple__13);
- Py_CLEAR(clear_module_state->__pyx_tuple__17);
- Py_CLEAR(clear_module_state->__pyx_tuple__19);
- Py_CLEAR(clear_module_state->__pyx_tuple__20);
- Py_CLEAR(clear_module_state->__pyx_tuple__23);
- Py_CLEAR(clear_module_state->__pyx_tuple__24);
- Py_CLEAR(clear_module_state->__pyx_tuple__27);
- Py_CLEAR(clear_module_state->__pyx_tuple__28);
- Py_CLEAR(clear_module_state->__pyx_tuple__30);
- Py_CLEAR(clear_module_state->__pyx_tuple__32);
- Py_CLEAR(clear_module_state->__pyx_tuple__34);
- Py_CLEAR(clear_module_state->__pyx_tuple__36);
- Py_CLEAR(clear_module_state->__pyx_tuple__37);
- Py_CLEAR(clear_module_state->__pyx_tuple__39);
- Py_CLEAR(clear_module_state->__pyx_tuple__41);
- Py_CLEAR(clear_module_state->__pyx_tuple__44);
- Py_CLEAR(clear_module_state->__pyx_tuple__46);
- Py_CLEAR(clear_module_state->__pyx_tuple__48);
- Py_CLEAR(clear_module_state->__pyx_tuple__50);
- Py_CLEAR(clear_module_state->__pyx_tuple__52);
- Py_CLEAR(clear_module_state->__pyx_tuple__54);
- Py_CLEAR(clear_module_state->__pyx_tuple__55);
- Py_CLEAR(clear_module_state->__pyx_tuple__57);
- Py_CLEAR(clear_module_state->__pyx_tuple__58);
- Py_CLEAR(clear_module_state->__pyx_tuple__60);
- Py_CLEAR(clear_module_state->__pyx_tuple__61);
- Py_CLEAR(clear_module_state->__pyx_tuple__63);
- Py_CLEAR(clear_module_state->__pyx_codeobj__29);
- Py_CLEAR(clear_module_state->__pyx_codeobj__31);
- Py_CLEAR(clear_module_state->__pyx_codeobj__33);
- Py_CLEAR(clear_module_state->__pyx_codeobj__35);
- Py_CLEAR(clear_module_state->__pyx_codeobj__38);
- Py_CLEAR(clear_module_state->__pyx_codeobj__40);
- Py_CLEAR(clear_module_state->__pyx_codeobj__42);
- Py_CLEAR(clear_module_state->__pyx_codeobj__43);
- Py_CLEAR(clear_module_state->__pyx_codeobj__45);
- Py_CLEAR(clear_module_state->__pyx_codeobj__47);
- Py_CLEAR(clear_module_state->__pyx_codeobj__49);
- Py_CLEAR(clear_module_state->__pyx_codeobj__51);
- Py_CLEAR(clear_module_state->__pyx_codeobj__53);
- Py_CLEAR(clear_module_state->__pyx_codeobj__56);
- Py_CLEAR(clear_module_state->__pyx_codeobj__59);
- Py_CLEAR(clear_module_state->__pyx_codeobj__62);
- return 0;
-}
-#endif
-/* #### Code section: module_state_traverse ### */
-#if CYTHON_USE_MODULE_STATE
-static int __pyx_m_traverse(PyObject *m, visitproc visit, void *arg) {
- __pyx_mstate *traverse_module_state = __pyx_mstate(m);
- if (!traverse_module_state) return 0;
- Py_VISIT(traverse_module_state->__pyx_d);
- Py_VISIT(traverse_module_state->__pyx_b);
- Py_VISIT(traverse_module_state->__pyx_cython_runtime);
- Py_VISIT(traverse_module_state->__pyx_empty_tuple);
- Py_VISIT(traverse_module_state->__pyx_empty_bytes);
- Py_VISIT(traverse_module_state->__pyx_empty_unicode);
- #ifdef __Pyx_CyFunction_USED
- Py_VISIT(traverse_module_state->__pyx_CyFunctionType);
- #endif
- #ifdef __Pyx_FusedFunction_USED
- Py_VISIT(traverse_module_state->__pyx_FusedFunctionType);
- #endif
- Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash);
- Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash);
- Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr);
- Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr);
- Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic);
- Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic);
- Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr);
- Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr);
- Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9);
- Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9);
- Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr);
- Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr);
- Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr);
- Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr);
- Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit);
- Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit);
- Py_VISIT(traverse_module_state->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr);
- Py_VISIT(traverse_module_state->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr);
- Py_VISIT(traverse_module_state->__pyx_kp_u_);
- Py_VISIT(traverse_module_state->__pyx_n_s_AssertionError);
- Py_VISIT(traverse_module_state->__pyx_kp_u_Autosplitting_images_from);
- Py_VISIT(traverse_module_state->__pyx_n_s_CAP_PROP_FRAME_COUNT);
- Py_VISIT(traverse_module_state->__pyx_kp_u_ERROR);
- Py_VISIT(traverse_module_state->__pyx_n_s_ExifTags);
- Py_VISIT(traverse_module_state->__pyx_n_s_F);
- Py_VISIT(traverse_module_state->__pyx_n_s_FLIP_LEFT_RIGHT);
- Py_VISIT(traverse_module_state->__pyx_n_s_FLIP_TOP_BOTTOM);
- Py_VISIT(traverse_module_state->__pyx_n_s_HELP_URL);
- Py_VISIT(traverse_module_state->__pyx_n_s_IMG_FORMATS);
- Py_VISIT(traverse_module_state->__pyx_n_s_INTER_AREA);
- Py_VISIT(traverse_module_state->__pyx_n_s_INTER_LINEAR);
- Py_VISIT(traverse_module_state->__pyx_n_s_Image);
- Py_VISIT(traverse_module_state->__pyx_n_s_ImageOps);
- Py_VISIT(traverse_module_state->__pyx_kp_u_Image_Not_Found);
- Py_VISIT(traverse_module_state->__pyx_n_s_LoadImages);
- Py_VISIT(traverse_module_state->__pyx_n_s_LoadImages___init);
- Py_VISIT(traverse_module_state->__pyx_n_s_LoadImages___iter);
- Py_VISIT(traverse_module_state->__pyx_n_s_LoadImages___len);
- Py_VISIT(traverse_module_state->__pyx_n_s_LoadImages___next);
- Py_VISIT(traverse_module_state->__pyx_n_s_LoadImages_new_video);
- Py_VISIT(traverse_module_state->__pyx_kp_u_No_images_or_videos_found_in);
- Py_VISIT(traverse_module_state->__pyx_n_u_Orientation);
- Py_VISIT(traverse_module_state->__pyx_n_s_PIL);
- Py_VISIT(traverse_module_state->__pyx_n_s_Path);
- Py_VISIT(traverse_module_state->__pyx_n_s_Pool);
- Py_VISIT(traverse_module_state->__pyx_n_s_ROTATE_180);
- Py_VISIT(traverse_module_state->__pyx_n_s_ROTATE_270);
- Py_VISIT(traverse_module_state->__pyx_n_s_ROTATE_90);
- Py_VISIT(traverse_module_state->__pyx_n_s_StopIteration);
- Py_VISIT(traverse_module_state->__pyx_kp_u_Supported_formats_are_images);
- Py_VISIT(traverse_module_state->__pyx_n_s_TAGS);
- Py_VISIT(traverse_module_state->__pyx_n_s_TRANSPOSE);
- Py_VISIT(traverse_module_state->__pyx_n_s_TRANSVERSE);
- Py_VISIT(traverse_module_state->__pyx_n_s_Thread);
- Py_VISIT(traverse_module_state->__pyx_n_s_ThreadPool);
- Py_VISIT(traverse_module_state->__pyx_n_s_VID_FORMATS);
- Py_VISIT(traverse_module_state->__pyx_n_s_VideoCapture);
- Py_VISIT(traverse_module_state->__pyx_n_s_ZipFile);
- Py_VISIT(traverse_module_state->__pyx_kp_u__10);
- Py_VISIT(traverse_module_state->__pyx_kp_u__18);
- Py_VISIT(traverse_module_state->__pyx_n_s__21);
- Py_VISIT(traverse_module_state->__pyx_n_u__21);
- Py_VISIT(traverse_module_state->__pyx_kp_u__25);
- Py_VISIT(traverse_module_state->__pyx_kp_u__26);
- Py_VISIT(traverse_module_state->__pyx_n_s__3);
- Py_VISIT(traverse_module_state->__pyx_kp_u__3);
- Py_VISIT(traverse_module_state->__pyx_kp_u__4);
- Py_VISIT(traverse_module_state->__pyx_kp_u__5);
- Py_VISIT(traverse_module_state->__pyx_kp_u__6);
- Py_VISIT(traverse_module_state->__pyx_n_s__64);
- Py_VISIT(traverse_module_state->__pyx_kp_u__7);
- Py_VISIT(traverse_module_state->__pyx_kp_u__8);
- Py_VISIT(traverse_module_state->__pyx_kp_u__9);
- Py_VISIT(traverse_module_state->__pyx_n_u_a);
- Py_VISIT(traverse_module_state->__pyx_n_s_annotated_only);
- Py_VISIT(traverse_module_state->__pyx_n_s_any);
- Py_VISIT(traverse_module_state->__pyx_n_s_append);
- Py_VISIT(traverse_module_state->__pyx_n_s_args);
- Py_VISIT(traverse_module_state->__pyx_n_s_array);
- Py_VISIT(traverse_module_state->__pyx_n_s_as_posix);
- Py_VISIT(traverse_module_state->__pyx_n_s_ascontiguousarray);
- Py_VISIT(traverse_module_state->__pyx_n_u_asf);
- Py_VISIT(traverse_module_state->__pyx_n_s_astype);
- Py_VISIT(traverse_module_state->__pyx_n_s_asyncio_coroutines);
- Py_VISIT(traverse_module_state->__pyx_n_s_augment);
- Py_VISIT(traverse_module_state->__pyx_n_s_auto);
- Py_VISIT(traverse_module_state->__pyx_n_s_autosplit);
- Py_VISIT(traverse_module_state->__pyx_n_s_autosplit_locals_genexpr);
- Py_VISIT(traverse_module_state->__pyx_kp_u_autosplit_test_txt);
- Py_VISIT(traverse_module_state->__pyx_kp_u_autosplit_train_txt);
- Py_VISIT(traverse_module_state->__pyx_kp_u_autosplit_val_txt);
- Py_VISIT(traverse_module_state->__pyx_n_u_avi);
- Py_VISIT(traverse_module_state->__pyx_n_s_b);
- Py_VISIT(traverse_module_state->__pyx_n_u_bmp);
- Py_VISIT(traverse_module_state->__pyx_kp_u_box_failure_in);
- Py_VISIT(traverse_module_state->__pyx_n_s_c);
- Py_VISIT(traverse_module_state->__pyx_n_s_cap);
- Py_VISIT(traverse_module_state->__pyx_n_s_choices);
- Py_VISIT(traverse_module_state->__pyx_n_s_class_getitem);
- Py_VISIT(traverse_module_state->__pyx_n_u_classifier);
- Py_VISIT(traverse_module_state->__pyx_n_s_cline_in_traceback);
- Py_VISIT(traverse_module_state->__pyx_n_s_clip);
- Py_VISIT(traverse_module_state->__pyx_n_s_close);
- Py_VISIT(traverse_module_state->__pyx_n_s_concatenate);
- Py_VISIT(traverse_module_state->__pyx_n_s_copy);
- Py_VISIT(traverse_module_state->__pyx_n_s_copyfile);
- Py_VISIT(traverse_module_state->__pyx_n_s_count);
- Py_VISIT(traverse_module_state->__pyx_n_s_create_folder);
- Py_VISIT(traverse_module_state->__pyx_n_s_cv2);
- Py_VISIT(traverse_module_state->__pyx_kp_u_datasets_coco128);
- Py_VISIT(traverse_module_state->__pyx_kp_u_datasets_coco128_images);
- Py_VISIT(traverse_module_state->__pyx_n_s_dict);
- Py_VISIT(traverse_module_state->__pyx_kp_u_disable);
- Py_VISIT(traverse_module_state->__pyx_n_u_dng);
- Py_VISIT(traverse_module_state->__pyx_n_s_doc);
- Py_VISIT(traverse_module_state->__pyx_kp_u_does_not_exist);
- Py_VISIT(traverse_module_state->__pyx_n_s_dtype);
- Py_VISIT(traverse_module_state->__pyx_kp_u_enable);
- Py_VISIT(traverse_module_state->__pyx_n_s_encode);
- Py_VISIT(traverse_module_state->__pyx_n_s_enter);
- Py_VISIT(traverse_module_state->__pyx_n_s_enumerate);
- Py_VISIT(traverse_module_state->__pyx_n_s_exif);
- Py_VISIT(traverse_module_state->__pyx_n_u_exif);
- Py_VISIT(traverse_module_state->__pyx_n_s_exif_size);
- Py_VISIT(traverse_module_state->__pyx_n_s_exif_transpose);
- Py_VISIT(traverse_module_state->__pyx_n_s_exists);
- Py_VISIT(traverse_module_state->__pyx_n_s_exit);
- Py_VISIT(traverse_module_state->__pyx_n_s_extract_boxes);
- Py_VISIT(traverse_module_state->__pyx_n_s_f);
- Py_VISIT(traverse_module_state->__pyx_n_s_file);
- Py_VISIT(traverse_module_state->__pyx_n_s_files);
- Py_VISIT(traverse_module_state->__pyx_n_u_flat);
- Py_VISIT(traverse_module_state->__pyx_n_s_flatten_recursive);
- Py_VISIT(traverse_module_state->__pyx_n_s_float32);
- Py_VISIT(traverse_module_state->__pyx_n_s_frame);
- Py_VISIT(traverse_module_state->__pyx_n_s_frames);
- Py_VISIT(traverse_module_state->__pyx_n_s_full);
- Py_VISIT(traverse_module_state->__pyx_n_s_functional);
- Py_VISIT(traverse_module_state->__pyx_kp_u_gc);
- Py_VISIT(traverse_module_state->__pyx_n_s_genexpr);
- Py_VISIT(traverse_module_state->__pyx_n_s_get);
- Py_VISIT(traverse_module_state->__pyx_n_s_get_hash);
- Py_VISIT(traverse_module_state->__pyx_n_s_get_hash_locals_genexpr);
- Py_VISIT(traverse_module_state->__pyx_n_s_getexif);
- Py_VISIT(traverse_module_state->__pyx_n_s_getexif_2);
- Py_VISIT(traverse_module_state->__pyx_n_s_getsize);
- Py_VISIT(traverse_module_state->__pyx_n_u_gif);
- Py_VISIT(traverse_module_state->__pyx_n_s_glob);
- Py_VISIT(traverse_module_state->__pyx_n_s_h);
- Py_VISIT(traverse_module_state->__pyx_n_s_h0);
- Py_VISIT(traverse_module_state->__pyx_n_s_hashlib);
- Py_VISIT(traverse_module_state->__pyx_n_s_hexdigest);
- Py_VISIT(traverse_module_state->__pyx_n_s_hp);
- Py_VISIT(traverse_module_state->__pyx_kp_u_https_github_com_ultralytics_yol);
- Py_VISIT(traverse_module_state->__pyx_n_s_i);
- Py_VISIT(traverse_module_state->__pyx_n_s_im);
- Py_VISIT(traverse_module_state->__pyx_n_s_im_file);
- Py_VISIT(traverse_module_state->__pyx_n_s_image);
- Py_VISIT(traverse_module_state->__pyx_n_u_image);
- Py_VISIT(traverse_module_state->__pyx_kp_u_image_2);
- Py_VISIT(traverse_module_state->__pyx_n_s_images);
- Py_VISIT(traverse_module_state->__pyx_n_u_images);
- Py_VISIT(traverse_module_state->__pyx_n_s_img);
- Py_VISIT(traverse_module_state->__pyx_n_s_img0);
- Py_VISIT(traverse_module_state->__pyx_n_s_img2label_paths);
- Py_VISIT(traverse_module_state->__pyx_n_s_img4);
- Py_VISIT(traverse_module_state->__pyx_n_s_img9);
- Py_VISIT(traverse_module_state->__pyx_n_s_img_files);
- Py_VISIT(traverse_module_state->__pyx_n_s_img_hw);
- Py_VISIT(traverse_module_state->__pyx_n_s_img_hw0);
- Py_VISIT(traverse_module_state->__pyx_n_s_img_npy);
- Py_VISIT(traverse_module_state->__pyx_n_s_img_paths);
- Py_VISIT(traverse_module_state->__pyx_n_s_img_size);
- Py_VISIT(traverse_module_state->__pyx_n_s_imgs);
- Py_VISIT(traverse_module_state->__pyx_n_s_import);
- Py_VISIT(traverse_module_state->__pyx_n_s_imread);
- Py_VISIT(traverse_module_state->__pyx_n_s_imwrite);
- Py_VISIT(traverse_module_state->__pyx_n_s_index);
- Py_VISIT(traverse_module_state->__pyx_n_s_indices);
- Py_VISIT(traverse_module_state->__pyx_n_s_info);
- Py_VISIT(traverse_module_state->__pyx_n_s_init);
- Py_VISIT(traverse_module_state->__pyx_n_s_init_subclass);
- Py_VISIT(traverse_module_state->__pyx_n_s_initializing);
- Py_VISIT(traverse_module_state->__pyx_n_s_int);
- Py_VISIT(traverse_module_state->__pyx_n_s_interpolation);
- Py_VISIT(traverse_module_state->__pyx_n_s_is_coroutine);
- Py_VISIT(traverse_module_state->__pyx_n_s_is_dir);
- Py_VISIT(traverse_module_state->__pyx_n_s_isdir);
- Py_VISIT(traverse_module_state->__pyx_kp_u_isenabled);
- Py_VISIT(traverse_module_state->__pyx_n_s_isfile);
- Py_VISIT(traverse_module_state->__pyx_n_s_items);
- Py_VISIT(traverse_module_state->__pyx_n_s_iter);
- Py_VISIT(traverse_module_state->__pyx_n_s_itertools);
- Py_VISIT(traverse_module_state->__pyx_n_s_j);
- Py_VISIT(traverse_module_state->__pyx_n_s_join);
- Py_VISIT(traverse_module_state->__pyx_n_u_jpeg);
- Py_VISIT(traverse_module_state->__pyx_kp_u_jpg);
- Py_VISIT(traverse_module_state->__pyx_n_u_jpg_2);
- Py_VISIT(traverse_module_state->__pyx_n_s_json);
- Py_VISIT(traverse_module_state->__pyx_n_s_k);
- Py_VISIT(traverse_module_state->__pyx_n_s_keys);
- Py_VISIT(traverse_module_state->__pyx_n_s_labels);
- Py_VISIT(traverse_module_state->__pyx_n_u_labels);
- Py_VISIT(traverse_module_state->__pyx_n_s_labels4);
- Py_VISIT(traverse_module_state->__pyx_n_s_labels9);
- Py_VISIT(traverse_module_state->__pyx_n_s_lb);
- Py_VISIT(traverse_module_state->__pyx_n_s_lb_file);
- Py_VISIT(traverse_module_state->__pyx_n_s_len);
- Py_VISIT(traverse_module_state->__pyx_n_s_letterbox);
- Py_VISIT(traverse_module_state->__pyx_n_s_load);
- Py_VISIT(traverse_module_state->__pyx_n_s_load_image);
- Py_VISIT(traverse_module_state->__pyx_n_s_load_mosaic);
- Py_VISIT(traverse_module_state->__pyx_n_s_load_mosaic9);
- Py_VISIT(traverse_module_state->__pyx_n_s_load_mosaic9_locals_genexpr);
- Py_VISIT(traverse_module_state->__pyx_n_s_load_mosaic_locals_genexpr);
- Py_VISIT(traverse_module_state->__pyx_n_s_lower);
- Py_VISIT(traverse_module_state->__pyx_n_u_m4v);
- Py_VISIT(traverse_module_state->__pyx_n_s_main);
- Py_VISIT(traverse_module_state->__pyx_n_s_makedirs);
- Py_VISIT(traverse_module_state->__pyx_n_s_math);
- Py_VISIT(traverse_module_state->__pyx_n_s_md5);
- Py_VISIT(traverse_module_state->__pyx_n_s_metaclass);
- Py_VISIT(traverse_module_state->__pyx_n_s_method);
- Py_VISIT(traverse_module_state->__pyx_n_s_missing_ok);
- Py_VISIT(traverse_module_state->__pyx_n_s_mkdir);
- Py_VISIT(traverse_module_state->__pyx_n_u_mkv);
- Py_VISIT(traverse_module_state->__pyx_n_s_mode);
- Py_VISIT(traverse_module_state->__pyx_n_s_module);
- Py_VISIT(traverse_module_state->__pyx_n_s_mosaic_border);
- Py_VISIT(traverse_module_state->__pyx_n_u_mov);
- Py_VISIT(traverse_module_state->__pyx_n_u_mp4);
- Py_VISIT(traverse_module_state->__pyx_n_u_mpeg);
- Py_VISIT(traverse_module_state->__pyx_n_u_mpg);
- Py_VISIT(traverse_module_state->__pyx_n_u_mpo);
- Py_VISIT(traverse_module_state->__pyx_n_s_multiprocessing_pool);
- Py_VISIT(traverse_module_state->__pyx_n_s_n);
- Py_VISIT(traverse_module_state->__pyx_n_s_name);
- Py_VISIT(traverse_module_state->__pyx_n_s_name_2);
- Py_VISIT(traverse_module_state->__pyx_kp_u_new);
- Py_VISIT(traverse_module_state->__pyx_n_s_new_path);
- Py_VISIT(traverse_module_state->__pyx_n_s_new_video);
- Py_VISIT(traverse_module_state->__pyx_n_s_next);
- Py_VISIT(traverse_module_state->__pyx_n_s_nf);
- Py_VISIT(traverse_module_state->__pyx_n_s_ni);
- Py_VISIT(traverse_module_state->__pyx_n_s_nn);
- Py_VISIT(traverse_module_state->__pyx_n_s_np);
- Py_VISIT(traverse_module_state->__pyx_n_s_npy);
- Py_VISIT(traverse_module_state->__pyx_n_s_numpy);
- Py_VISIT(traverse_module_state->__pyx_n_s_nv);
- Py_VISIT(traverse_module_state->__pyx_n_s_open);
- Py_VISIT(traverse_module_state->__pyx_n_s_orientation);
- Py_VISIT(traverse_module_state->__pyx_n_s_os);
- Py_VISIT(traverse_module_state->__pyx_n_s_out);
- Py_VISIT(traverse_module_state->__pyx_n_s_p);
- Py_VISIT(traverse_module_state->__pyx_n_s_padh);
- Py_VISIT(traverse_module_state->__pyx_n_s_padw);
- Py_VISIT(traverse_module_state->__pyx_n_s_padx);
- Py_VISIT(traverse_module_state->__pyx_n_s_pady);
- Py_VISIT(traverse_module_state->__pyx_n_s_parent);
- Py_VISIT(traverse_module_state->__pyx_n_s_parents);
- Py_VISIT(traverse_module_state->__pyx_n_s_path);
- Py_VISIT(traverse_module_state->__pyx_n_s_pathlib);
- Py_VISIT(traverse_module_state->__pyx_n_s_paths);
- Py_VISIT(traverse_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils);
- Py_VISIT(traverse_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2);
- Py_VISIT(traverse_module_state->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3);
- Py_VISIT(traverse_module_state->__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4);
- Py_VISIT(traverse_module_state->__pyx_n_u_png);
- Py_VISIT(traverse_module_state->__pyx_n_s_prepare);
- Py_VISIT(traverse_module_state->__pyx_n_s_print);
- Py_VISIT(traverse_module_state->__pyx_n_s_qualname);
- Py_VISIT(traverse_module_state->__pyx_n_s_r);
- Py_VISIT(traverse_module_state->__pyx_n_s_random);
- Py_VISIT(traverse_module_state->__pyx_n_s_ravel);
- Py_VISIT(traverse_module_state->__pyx_n_s_read);
- Py_VISIT(traverse_module_state->__pyx_n_s_recursive);
- Py_VISIT(traverse_module_state->__pyx_n_s_relative_to);
- Py_VISIT(traverse_module_state->__pyx_n_s_release);
- Py_VISIT(traverse_module_state->__pyx_n_s_repeat);
- Py_VISIT(traverse_module_state->__pyx_n_s_reshape);
- Py_VISIT(traverse_module_state->__pyx_n_s_resize);
- Py_VISIT(traverse_module_state->__pyx_n_s_resolve);
- Py_VISIT(traverse_module_state->__pyx_n_s_ret_val);
- Py_VISIT(traverse_module_state->__pyx_n_s_rglob);
- Py_VISIT(traverse_module_state->__pyx_n_s_rmtree);
- Py_VISIT(traverse_module_state->__pyx_n_s_rotation);
- Py_VISIT(traverse_module_state->__pyx_n_s_rsplit);
- Py_VISIT(traverse_module_state->__pyx_n_s_s);
- Py_VISIT(traverse_module_state->__pyx_n_s_sa);
- Py_VISIT(traverse_module_state->__pyx_n_s_sb);
- Py_VISIT(traverse_module_state->__pyx_n_s_seed);
- Py_VISIT(traverse_module_state->__pyx_n_s_segments);
- Py_VISIT(traverse_module_state->__pyx_n_s_segments4);
- Py_VISIT(traverse_module_state->__pyx_n_s_segments9);
- Py_VISIT(traverse_module_state->__pyx_n_s_self);
- Py_VISIT(traverse_module_state->__pyx_n_s_send);
- Py_VISIT(traverse_module_state->__pyx_n_s_sep);
- Py_VISIT(traverse_module_state->__pyx_n_s_set_name);
- Py_VISIT(traverse_module_state->__pyx_n_s_shape);
- Py_VISIT(traverse_module_state->__pyx_n_s_shuffle);
- Py_VISIT(traverse_module_state->__pyx_n_s_shutil);
- Py_VISIT(traverse_module_state->__pyx_n_s_size);
- Py_VISIT(traverse_module_state->__pyx_n_s_spec);
- Py_VISIT(traverse_module_state->__pyx_n_s_split);
- Py_VISIT(traverse_module_state->__pyx_n_s_splitlines);
- Py_VISIT(traverse_module_state->__pyx_n_s_stem);
- Py_VISIT(traverse_module_state->__pyx_n_s_stride);
- Py_VISIT(traverse_module_state->__pyx_n_s_strip);
- Py_VISIT(traverse_module_state->__pyx_n_s_suffix);
- Py_VISIT(traverse_module_state->__pyx_n_s_sum);
- Py_VISIT(traverse_module_state->__pyx_n_s_super);
- Py_VISIT(traverse_module_state->__pyx_n_s_test);
- Py_VISIT(traverse_module_state->__pyx_n_s_threading);
- Py_VISIT(traverse_module_state->__pyx_n_s_throw);
- Py_VISIT(traverse_module_state->__pyx_n_u_tif);
- Py_VISIT(traverse_module_state->__pyx_n_u_tiff);
- Py_VISIT(traverse_module_state->__pyx_n_s_time);
- Py_VISIT(traverse_module_state->__pyx_n_s_tobytes);
- Py_VISIT(traverse_module_state->__pyx_n_s_torch);
- Py_VISIT(traverse_module_state->__pyx_n_s_torch_nn_functional);
- Py_VISIT(traverse_module_state->__pyx_n_s_total);
- Py_VISIT(traverse_module_state->__pyx_n_s_tqdm);
- Py_VISIT(traverse_module_state->__pyx_n_s_transpose);
- Py_VISIT(traverse_module_state->__pyx_kp_u_txt);
- Py_VISIT(traverse_module_state->__pyx_n_s_txt_2);
- Py_VISIT(traverse_module_state->__pyx_n_s_uint8);
- Py_VISIT(traverse_module_state->__pyx_n_s_uniform);
- Py_VISIT(traverse_module_state->__pyx_n_s_unlink);
- Py_VISIT(traverse_module_state->__pyx_n_s_update);
- Py_VISIT(traverse_module_state->__pyx_kp_u_using_txt_labeled_images_only);
- Py_VISIT(traverse_module_state->__pyx_n_u_video);
- Py_VISIT(traverse_module_state->__pyx_kp_u_video_2);
- Py_VISIT(traverse_module_state->__pyx_n_s_video_flag);
- Py_VISIT(traverse_module_state->__pyx_kp_u_videos);
- Py_VISIT(traverse_module_state->__pyx_n_s_videos_2);
- Py_VISIT(traverse_module_state->__pyx_n_s_w);
- Py_VISIT(traverse_module_state->__pyx_n_s_w0);
- Py_VISIT(traverse_module_state->__pyx_n_u_webp);
- Py_VISIT(traverse_module_state->__pyx_n_s_weights);
- Py_VISIT(traverse_module_state->__pyx_n_u_wmv);
- Py_VISIT(traverse_module_state->__pyx_n_s_wp);
- Py_VISIT(traverse_module_state->__pyx_n_s_write);
- Py_VISIT(traverse_module_state->__pyx_n_s_x);
- Py_VISIT(traverse_module_state->__pyx_n_s_x1);
- Py_VISIT(traverse_module_state->__pyx_n_s_x1a);
- Py_VISIT(traverse_module_state->__pyx_n_s_x1b);
- Py_VISIT(traverse_module_state->__pyx_n_s_x2);
- Py_VISIT(traverse_module_state->__pyx_n_s_x2a);
- Py_VISIT(traverse_module_state->__pyx_n_s_x2b);
- Py_VISIT(traverse_module_state->__pyx_n_s_xc);
- Py_VISIT(traverse_module_state->__pyx_n_s_xyn2xy);
- Py_VISIT(traverse_module_state->__pyx_n_s_xywh2xyxy);
- Py_VISIT(traverse_module_state->__pyx_n_s_xywhn2xyxy);
- Py_VISIT(traverse_module_state->__pyx_n_s_y1);
- Py_VISIT(traverse_module_state->__pyx_n_s_y1a);
- Py_VISIT(traverse_module_state->__pyx_n_s_y1b);
- Py_VISIT(traverse_module_state->__pyx_n_s_y2);
- Py_VISIT(traverse_module_state->__pyx_n_s_y2a);
- Py_VISIT(traverse_module_state->__pyx_n_s_y2b);
- Py_VISIT(traverse_module_state->__pyx_n_s_yaml);
- Py_VISIT(traverse_module_state->__pyx_n_s_yc);
- Py_VISIT(traverse_module_state->__pyx_n_s_zip);
- Py_VISIT(traverse_module_state->__pyx_n_s_zipfile);
- Py_VISIT(traverse_module_state->__pyx_float_0_0);
- Py_VISIT(traverse_module_state->__pyx_float_0_1);
- Py_VISIT(traverse_module_state->__pyx_float_0_9);
- Py_VISIT(traverse_module_state->__pyx_float_1_2);
- Py_VISIT(traverse_module_state->__pyx_int_0);
- Py_VISIT(traverse_module_state->__pyx_int_1);
- Py_VISIT(traverse_module_state->__pyx_int_2);
- Py_VISIT(traverse_module_state->__pyx_int_3);
- Py_VISIT(traverse_module_state->__pyx_int_4);
- Py_VISIT(traverse_module_state->__pyx_int_5);
- Py_VISIT(traverse_module_state->__pyx_int_6);
- Py_VISIT(traverse_module_state->__pyx_int_7);
- Py_VISIT(traverse_module_state->__pyx_int_8);
Py_VISIT(traverse_module_state->__pyx_int_32); - Py_VISIT(traverse_module_state->__pyx_int_114); - Py_VISIT(traverse_module_state->__pyx_int_274); - Py_VISIT(traverse_module_state->__pyx_int_640); - Py_VISIT(traverse_module_state->__pyx_int_neg_1); - Py_VISIT(traverse_module_state->__pyx_tuple__2); - Py_VISIT(traverse_module_state->__pyx_slice__12); - Py_VISIT(traverse_module_state->__pyx_slice__14); - Py_VISIT(traverse_module_state->__pyx_slice__15); - Py_VISIT(traverse_module_state->__pyx_slice__16); - Py_VISIT(traverse_module_state->__pyx_slice__22); - Py_VISIT(traverse_module_state->__pyx_tuple__11); - Py_VISIT(traverse_module_state->__pyx_tuple__13); - Py_VISIT(traverse_module_state->__pyx_tuple__17); - Py_VISIT(traverse_module_state->__pyx_tuple__19); - Py_VISIT(traverse_module_state->__pyx_tuple__20); - Py_VISIT(traverse_module_state->__pyx_tuple__23); - Py_VISIT(traverse_module_state->__pyx_tuple__24); - Py_VISIT(traverse_module_state->__pyx_tuple__27); - Py_VISIT(traverse_module_state->__pyx_tuple__28); - Py_VISIT(traverse_module_state->__pyx_tuple__30); - Py_VISIT(traverse_module_state->__pyx_tuple__32); - Py_VISIT(traverse_module_state->__pyx_tuple__34); - Py_VISIT(traverse_module_state->__pyx_tuple__36); - Py_VISIT(traverse_module_state->__pyx_tuple__37); - Py_VISIT(traverse_module_state->__pyx_tuple__39); - Py_VISIT(traverse_module_state->__pyx_tuple__41); - Py_VISIT(traverse_module_state->__pyx_tuple__44); - Py_VISIT(traverse_module_state->__pyx_tuple__46); - Py_VISIT(traverse_module_state->__pyx_tuple__48); - Py_VISIT(traverse_module_state->__pyx_tuple__50); - Py_VISIT(traverse_module_state->__pyx_tuple__52); - Py_VISIT(traverse_module_state->__pyx_tuple__54); - Py_VISIT(traverse_module_state->__pyx_tuple__55); - Py_VISIT(traverse_module_state->__pyx_tuple__57); - Py_VISIT(traverse_module_state->__pyx_tuple__58); - Py_VISIT(traverse_module_state->__pyx_tuple__60); - Py_VISIT(traverse_module_state->__pyx_tuple__61); - 
Py_VISIT(traverse_module_state->__pyx_tuple__63); - Py_VISIT(traverse_module_state->__pyx_codeobj__29); - Py_VISIT(traverse_module_state->__pyx_codeobj__31); - Py_VISIT(traverse_module_state->__pyx_codeobj__33); - Py_VISIT(traverse_module_state->__pyx_codeobj__35); - Py_VISIT(traverse_module_state->__pyx_codeobj__38); - Py_VISIT(traverse_module_state->__pyx_codeobj__40); - Py_VISIT(traverse_module_state->__pyx_codeobj__42); - Py_VISIT(traverse_module_state->__pyx_codeobj__43); - Py_VISIT(traverse_module_state->__pyx_codeobj__45); - Py_VISIT(traverse_module_state->__pyx_codeobj__47); - Py_VISIT(traverse_module_state->__pyx_codeobj__49); - Py_VISIT(traverse_module_state->__pyx_codeobj__51); - Py_VISIT(traverse_module_state->__pyx_codeobj__53); - Py_VISIT(traverse_module_state->__pyx_codeobj__56); - Py_VISIT(traverse_module_state->__pyx_codeobj__59); - Py_VISIT(traverse_module_state->__pyx_codeobj__62); - return 0; -} -#endif -/* #### Code section: module_state_defines ### */ -#if CYTHON_USE_MODULE_STATE -#define __pyx_d __pyx_mstate_global->__pyx_d -#define __pyx_b __pyx_mstate_global->__pyx_b -#define __pyx_cython_runtime __pyx_mstate_global->__pyx_cython_runtime -#define __pyx_empty_tuple __pyx_mstate_global->__pyx_empty_tuple -#define __pyx_empty_bytes __pyx_mstate_global->__pyx_empty_bytes -#define __pyx_empty_unicode __pyx_mstate_global->__pyx_empty_unicode -#ifdef __Pyx_CyFunction_USED -#define __pyx_CyFunctionType __pyx_mstate_global->__pyx_CyFunctionType -#endif -#ifdef __Pyx_FusedFunction_USED -#define __pyx_FusedFunctionType __pyx_mstate_global->__pyx_FusedFunctionType -#endif -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash 
__pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr 
__pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit -#define __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr __pyx_mstate_global->__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr -#define __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr __pyx_mstate_global->__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr -#define __pyx_kp_u_ __pyx_mstate_global->__pyx_kp_u_ -#define __pyx_n_s_AssertionError __pyx_mstate_global->__pyx_n_s_AssertionError -#define __pyx_kp_u_Autosplitting_images_from __pyx_mstate_global->__pyx_kp_u_Autosplitting_images_from -#define __pyx_n_s_CAP_PROP_FRAME_COUNT __pyx_mstate_global->__pyx_n_s_CAP_PROP_FRAME_COUNT -#define __pyx_kp_u_ERROR __pyx_mstate_global->__pyx_kp_u_ERROR -#define 
__pyx_n_s_ExifTags __pyx_mstate_global->__pyx_n_s_ExifTags -#define __pyx_n_s_F __pyx_mstate_global->__pyx_n_s_F -#define __pyx_n_s_FLIP_LEFT_RIGHT __pyx_mstate_global->__pyx_n_s_FLIP_LEFT_RIGHT -#define __pyx_n_s_FLIP_TOP_BOTTOM __pyx_mstate_global->__pyx_n_s_FLIP_TOP_BOTTOM -#define __pyx_n_s_HELP_URL __pyx_mstate_global->__pyx_n_s_HELP_URL -#define __pyx_n_s_IMG_FORMATS __pyx_mstate_global->__pyx_n_s_IMG_FORMATS -#define __pyx_n_s_INTER_AREA __pyx_mstate_global->__pyx_n_s_INTER_AREA -#define __pyx_n_s_INTER_LINEAR __pyx_mstate_global->__pyx_n_s_INTER_LINEAR -#define __pyx_n_s_Image __pyx_mstate_global->__pyx_n_s_Image -#define __pyx_n_s_ImageOps __pyx_mstate_global->__pyx_n_s_ImageOps -#define __pyx_kp_u_Image_Not_Found __pyx_mstate_global->__pyx_kp_u_Image_Not_Found -#define __pyx_n_s_LoadImages __pyx_mstate_global->__pyx_n_s_LoadImages -#define __pyx_n_s_LoadImages___init __pyx_mstate_global->__pyx_n_s_LoadImages___init -#define __pyx_n_s_LoadImages___iter __pyx_mstate_global->__pyx_n_s_LoadImages___iter -#define __pyx_n_s_LoadImages___len __pyx_mstate_global->__pyx_n_s_LoadImages___len -#define __pyx_n_s_LoadImages___next __pyx_mstate_global->__pyx_n_s_LoadImages___next -#define __pyx_n_s_LoadImages_new_video __pyx_mstate_global->__pyx_n_s_LoadImages_new_video -#define __pyx_kp_u_No_images_or_videos_found_in __pyx_mstate_global->__pyx_kp_u_No_images_or_videos_found_in -#define __pyx_n_u_Orientation __pyx_mstate_global->__pyx_n_u_Orientation -#define __pyx_n_s_PIL __pyx_mstate_global->__pyx_n_s_PIL -#define __pyx_n_s_Path __pyx_mstate_global->__pyx_n_s_Path -#define __pyx_n_s_Pool __pyx_mstate_global->__pyx_n_s_Pool -#define __pyx_n_s_ROTATE_180 __pyx_mstate_global->__pyx_n_s_ROTATE_180 -#define __pyx_n_s_ROTATE_270 __pyx_mstate_global->__pyx_n_s_ROTATE_270 -#define __pyx_n_s_ROTATE_90 __pyx_mstate_global->__pyx_n_s_ROTATE_90 -#define __pyx_n_s_StopIteration __pyx_mstate_global->__pyx_n_s_StopIteration -#define __pyx_kp_u_Supported_formats_are_images 
__pyx_mstate_global->__pyx_kp_u_Supported_formats_are_images -#define __pyx_n_s_TAGS __pyx_mstate_global->__pyx_n_s_TAGS -#define __pyx_n_s_TRANSPOSE __pyx_mstate_global->__pyx_n_s_TRANSPOSE -#define __pyx_n_s_TRANSVERSE __pyx_mstate_global->__pyx_n_s_TRANSVERSE -#define __pyx_n_s_Thread __pyx_mstate_global->__pyx_n_s_Thread -#define __pyx_n_s_ThreadPool __pyx_mstate_global->__pyx_n_s_ThreadPool -#define __pyx_n_s_VID_FORMATS __pyx_mstate_global->__pyx_n_s_VID_FORMATS -#define __pyx_n_s_VideoCapture __pyx_mstate_global->__pyx_n_s_VideoCapture -#define __pyx_n_s_ZipFile __pyx_mstate_global->__pyx_n_s_ZipFile -#define __pyx_kp_u__10 __pyx_mstate_global->__pyx_kp_u__10 -#define __pyx_kp_u__18 __pyx_mstate_global->__pyx_kp_u__18 -#define __pyx_n_s__21 __pyx_mstate_global->__pyx_n_s__21 -#define __pyx_n_u__21 __pyx_mstate_global->__pyx_n_u__21 -#define __pyx_kp_u__25 __pyx_mstate_global->__pyx_kp_u__25 -#define __pyx_kp_u__26 __pyx_mstate_global->__pyx_kp_u__26 -#define __pyx_n_s__3 __pyx_mstate_global->__pyx_n_s__3 -#define __pyx_kp_u__3 __pyx_mstate_global->__pyx_kp_u__3 -#define __pyx_kp_u__4 __pyx_mstate_global->__pyx_kp_u__4 -#define __pyx_kp_u__5 __pyx_mstate_global->__pyx_kp_u__5 -#define __pyx_kp_u__6 __pyx_mstate_global->__pyx_kp_u__6 -#define __pyx_n_s__64 __pyx_mstate_global->__pyx_n_s__64 -#define __pyx_kp_u__7 __pyx_mstate_global->__pyx_kp_u__7 -#define __pyx_kp_u__8 __pyx_mstate_global->__pyx_kp_u__8 -#define __pyx_kp_u__9 __pyx_mstate_global->__pyx_kp_u__9 -#define __pyx_n_u_a __pyx_mstate_global->__pyx_n_u_a -#define __pyx_n_s_annotated_only __pyx_mstate_global->__pyx_n_s_annotated_only -#define __pyx_n_s_any __pyx_mstate_global->__pyx_n_s_any -#define __pyx_n_s_append __pyx_mstate_global->__pyx_n_s_append -#define __pyx_n_s_args __pyx_mstate_global->__pyx_n_s_args -#define __pyx_n_s_array __pyx_mstate_global->__pyx_n_s_array -#define __pyx_n_s_as_posix __pyx_mstate_global->__pyx_n_s_as_posix -#define __pyx_n_s_ascontiguousarray 
__pyx_mstate_global->__pyx_n_s_ascontiguousarray -#define __pyx_n_u_asf __pyx_mstate_global->__pyx_n_u_asf -#define __pyx_n_s_astype __pyx_mstate_global->__pyx_n_s_astype -#define __pyx_n_s_asyncio_coroutines __pyx_mstate_global->__pyx_n_s_asyncio_coroutines -#define __pyx_n_s_augment __pyx_mstate_global->__pyx_n_s_augment -#define __pyx_n_s_auto __pyx_mstate_global->__pyx_n_s_auto -#define __pyx_n_s_autosplit __pyx_mstate_global->__pyx_n_s_autosplit -#define __pyx_n_s_autosplit_locals_genexpr __pyx_mstate_global->__pyx_n_s_autosplit_locals_genexpr -#define __pyx_kp_u_autosplit_test_txt __pyx_mstate_global->__pyx_kp_u_autosplit_test_txt -#define __pyx_kp_u_autosplit_train_txt __pyx_mstate_global->__pyx_kp_u_autosplit_train_txt -#define __pyx_kp_u_autosplit_val_txt __pyx_mstate_global->__pyx_kp_u_autosplit_val_txt -#define __pyx_n_u_avi __pyx_mstate_global->__pyx_n_u_avi -#define __pyx_n_s_b __pyx_mstate_global->__pyx_n_s_b -#define __pyx_n_u_bmp __pyx_mstate_global->__pyx_n_u_bmp -#define __pyx_kp_u_box_failure_in __pyx_mstate_global->__pyx_kp_u_box_failure_in -#define __pyx_n_s_c __pyx_mstate_global->__pyx_n_s_c -#define __pyx_n_s_cap __pyx_mstate_global->__pyx_n_s_cap -#define __pyx_n_s_choices __pyx_mstate_global->__pyx_n_s_choices -#define __pyx_n_s_class_getitem __pyx_mstate_global->__pyx_n_s_class_getitem -#define __pyx_n_u_classifier __pyx_mstate_global->__pyx_n_u_classifier -#define __pyx_n_s_cline_in_traceback __pyx_mstate_global->__pyx_n_s_cline_in_traceback -#define __pyx_n_s_clip __pyx_mstate_global->__pyx_n_s_clip -#define __pyx_n_s_close __pyx_mstate_global->__pyx_n_s_close -#define __pyx_n_s_concatenate __pyx_mstate_global->__pyx_n_s_concatenate -#define __pyx_n_s_copy __pyx_mstate_global->__pyx_n_s_copy -#define __pyx_n_s_copyfile __pyx_mstate_global->__pyx_n_s_copyfile -#define __pyx_n_s_count __pyx_mstate_global->__pyx_n_s_count -#define __pyx_n_s_create_folder __pyx_mstate_global->__pyx_n_s_create_folder -#define __pyx_n_s_cv2 
__pyx_mstate_global->__pyx_n_s_cv2 -#define __pyx_kp_u_datasets_coco128 __pyx_mstate_global->__pyx_kp_u_datasets_coco128 -#define __pyx_kp_u_datasets_coco128_images __pyx_mstate_global->__pyx_kp_u_datasets_coco128_images -#define __pyx_n_s_dict __pyx_mstate_global->__pyx_n_s_dict -#define __pyx_kp_u_disable __pyx_mstate_global->__pyx_kp_u_disable -#define __pyx_n_u_dng __pyx_mstate_global->__pyx_n_u_dng -#define __pyx_n_s_doc __pyx_mstate_global->__pyx_n_s_doc -#define __pyx_kp_u_does_not_exist __pyx_mstate_global->__pyx_kp_u_does_not_exist -#define __pyx_n_s_dtype __pyx_mstate_global->__pyx_n_s_dtype -#define __pyx_kp_u_enable __pyx_mstate_global->__pyx_kp_u_enable -#define __pyx_n_s_encode __pyx_mstate_global->__pyx_n_s_encode -#define __pyx_n_s_enter __pyx_mstate_global->__pyx_n_s_enter -#define __pyx_n_s_enumerate __pyx_mstate_global->__pyx_n_s_enumerate -#define __pyx_n_s_exif __pyx_mstate_global->__pyx_n_s_exif -#define __pyx_n_u_exif __pyx_mstate_global->__pyx_n_u_exif -#define __pyx_n_s_exif_size __pyx_mstate_global->__pyx_n_s_exif_size -#define __pyx_n_s_exif_transpose __pyx_mstate_global->__pyx_n_s_exif_transpose -#define __pyx_n_s_exists __pyx_mstate_global->__pyx_n_s_exists -#define __pyx_n_s_exit __pyx_mstate_global->__pyx_n_s_exit -#define __pyx_n_s_extract_boxes __pyx_mstate_global->__pyx_n_s_extract_boxes -#define __pyx_n_s_f __pyx_mstate_global->__pyx_n_s_f -#define __pyx_n_s_file __pyx_mstate_global->__pyx_n_s_file -#define __pyx_n_s_files __pyx_mstate_global->__pyx_n_s_files -#define __pyx_n_u_flat __pyx_mstate_global->__pyx_n_u_flat -#define __pyx_n_s_flatten_recursive __pyx_mstate_global->__pyx_n_s_flatten_recursive -#define __pyx_n_s_float32 __pyx_mstate_global->__pyx_n_s_float32 -#define __pyx_n_s_frame __pyx_mstate_global->__pyx_n_s_frame -#define __pyx_n_s_frames __pyx_mstate_global->__pyx_n_s_frames -#define __pyx_n_s_full __pyx_mstate_global->__pyx_n_s_full -#define __pyx_n_s_functional __pyx_mstate_global->__pyx_n_s_functional -#define 
__pyx_kp_u_gc __pyx_mstate_global->__pyx_kp_u_gc -#define __pyx_n_s_genexpr __pyx_mstate_global->__pyx_n_s_genexpr -#define __pyx_n_s_get __pyx_mstate_global->__pyx_n_s_get -#define __pyx_n_s_get_hash __pyx_mstate_global->__pyx_n_s_get_hash -#define __pyx_n_s_get_hash_locals_genexpr __pyx_mstate_global->__pyx_n_s_get_hash_locals_genexpr -#define __pyx_n_s_getexif __pyx_mstate_global->__pyx_n_s_getexif -#define __pyx_n_s_getexif_2 __pyx_mstate_global->__pyx_n_s_getexif_2 -#define __pyx_n_s_getsize __pyx_mstate_global->__pyx_n_s_getsize -#define __pyx_n_u_gif __pyx_mstate_global->__pyx_n_u_gif -#define __pyx_n_s_glob __pyx_mstate_global->__pyx_n_s_glob -#define __pyx_n_s_h __pyx_mstate_global->__pyx_n_s_h -#define __pyx_n_s_h0 __pyx_mstate_global->__pyx_n_s_h0 -#define __pyx_n_s_hashlib __pyx_mstate_global->__pyx_n_s_hashlib -#define __pyx_n_s_hexdigest __pyx_mstate_global->__pyx_n_s_hexdigest -#define __pyx_n_s_hp __pyx_mstate_global->__pyx_n_s_hp -#define __pyx_kp_u_https_github_com_ultralytics_yol __pyx_mstate_global->__pyx_kp_u_https_github_com_ultralytics_yol -#define __pyx_n_s_i __pyx_mstate_global->__pyx_n_s_i -#define __pyx_n_s_im __pyx_mstate_global->__pyx_n_s_im -#define __pyx_n_s_im_file __pyx_mstate_global->__pyx_n_s_im_file -#define __pyx_n_s_image __pyx_mstate_global->__pyx_n_s_image -#define __pyx_n_u_image __pyx_mstate_global->__pyx_n_u_image -#define __pyx_kp_u_image_2 __pyx_mstate_global->__pyx_kp_u_image_2 -#define __pyx_n_s_images __pyx_mstate_global->__pyx_n_s_images -#define __pyx_n_u_images __pyx_mstate_global->__pyx_n_u_images -#define __pyx_n_s_img __pyx_mstate_global->__pyx_n_s_img -#define __pyx_n_s_img0 __pyx_mstate_global->__pyx_n_s_img0 -#define __pyx_n_s_img2label_paths __pyx_mstate_global->__pyx_n_s_img2label_paths -#define __pyx_n_s_img4 __pyx_mstate_global->__pyx_n_s_img4 -#define __pyx_n_s_img9 __pyx_mstate_global->__pyx_n_s_img9 -#define __pyx_n_s_img_files __pyx_mstate_global->__pyx_n_s_img_files -#define __pyx_n_s_img_hw 
__pyx_mstate_global->__pyx_n_s_img_hw -#define __pyx_n_s_img_hw0 __pyx_mstate_global->__pyx_n_s_img_hw0 -#define __pyx_n_s_img_npy __pyx_mstate_global->__pyx_n_s_img_npy -#define __pyx_n_s_img_paths __pyx_mstate_global->__pyx_n_s_img_paths -#define __pyx_n_s_img_size __pyx_mstate_global->__pyx_n_s_img_size -#define __pyx_n_s_imgs __pyx_mstate_global->__pyx_n_s_imgs -#define __pyx_n_s_import __pyx_mstate_global->__pyx_n_s_import -#define __pyx_n_s_imread __pyx_mstate_global->__pyx_n_s_imread -#define __pyx_n_s_imwrite __pyx_mstate_global->__pyx_n_s_imwrite -#define __pyx_n_s_index __pyx_mstate_global->__pyx_n_s_index -#define __pyx_n_s_indices __pyx_mstate_global->__pyx_n_s_indices -#define __pyx_n_s_info __pyx_mstate_global->__pyx_n_s_info -#define __pyx_n_s_init __pyx_mstate_global->__pyx_n_s_init -#define __pyx_n_s_init_subclass __pyx_mstate_global->__pyx_n_s_init_subclass -#define __pyx_n_s_initializing __pyx_mstate_global->__pyx_n_s_initializing -#define __pyx_n_s_int __pyx_mstate_global->__pyx_n_s_int -#define __pyx_n_s_interpolation __pyx_mstate_global->__pyx_n_s_interpolation -#define __pyx_n_s_is_coroutine __pyx_mstate_global->__pyx_n_s_is_coroutine -#define __pyx_n_s_is_dir __pyx_mstate_global->__pyx_n_s_is_dir -#define __pyx_n_s_isdir __pyx_mstate_global->__pyx_n_s_isdir -#define __pyx_kp_u_isenabled __pyx_mstate_global->__pyx_kp_u_isenabled -#define __pyx_n_s_isfile __pyx_mstate_global->__pyx_n_s_isfile -#define __pyx_n_s_items __pyx_mstate_global->__pyx_n_s_items -#define __pyx_n_s_iter __pyx_mstate_global->__pyx_n_s_iter -#define __pyx_n_s_itertools __pyx_mstate_global->__pyx_n_s_itertools -#define __pyx_n_s_j __pyx_mstate_global->__pyx_n_s_j -#define __pyx_n_s_join __pyx_mstate_global->__pyx_n_s_join -#define __pyx_n_u_jpeg __pyx_mstate_global->__pyx_n_u_jpeg -#define __pyx_kp_u_jpg __pyx_mstate_global->__pyx_kp_u_jpg -#define __pyx_n_u_jpg_2 __pyx_mstate_global->__pyx_n_u_jpg_2 -#define __pyx_n_s_json __pyx_mstate_global->__pyx_n_s_json -#define 
__pyx_n_s_k __pyx_mstate_global->__pyx_n_s_k -#define __pyx_n_s_keys __pyx_mstate_global->__pyx_n_s_keys -#define __pyx_n_s_labels __pyx_mstate_global->__pyx_n_s_labels -#define __pyx_n_u_labels __pyx_mstate_global->__pyx_n_u_labels -#define __pyx_n_s_labels4 __pyx_mstate_global->__pyx_n_s_labels4 -#define __pyx_n_s_labels9 __pyx_mstate_global->__pyx_n_s_labels9 -#define __pyx_n_s_lb __pyx_mstate_global->__pyx_n_s_lb -#define __pyx_n_s_lb_file __pyx_mstate_global->__pyx_n_s_lb_file -#define __pyx_n_s_len __pyx_mstate_global->__pyx_n_s_len -#define __pyx_n_s_letterbox __pyx_mstate_global->__pyx_n_s_letterbox -#define __pyx_n_s_load __pyx_mstate_global->__pyx_n_s_load -#define __pyx_n_s_load_image __pyx_mstate_global->__pyx_n_s_load_image -#define __pyx_n_s_load_mosaic __pyx_mstate_global->__pyx_n_s_load_mosaic -#define __pyx_n_s_load_mosaic9 __pyx_mstate_global->__pyx_n_s_load_mosaic9 -#define __pyx_n_s_load_mosaic9_locals_genexpr __pyx_mstate_global->__pyx_n_s_load_mosaic9_locals_genexpr -#define __pyx_n_s_load_mosaic_locals_genexpr __pyx_mstate_global->__pyx_n_s_load_mosaic_locals_genexpr -#define __pyx_n_s_lower __pyx_mstate_global->__pyx_n_s_lower -#define __pyx_n_u_m4v __pyx_mstate_global->__pyx_n_u_m4v -#define __pyx_n_s_main __pyx_mstate_global->__pyx_n_s_main -#define __pyx_n_s_makedirs __pyx_mstate_global->__pyx_n_s_makedirs -#define __pyx_n_s_math __pyx_mstate_global->__pyx_n_s_math -#define __pyx_n_s_md5 __pyx_mstate_global->__pyx_n_s_md5 -#define __pyx_n_s_metaclass __pyx_mstate_global->__pyx_n_s_metaclass -#define __pyx_n_s_method __pyx_mstate_global->__pyx_n_s_method -#define __pyx_n_s_missing_ok __pyx_mstate_global->__pyx_n_s_missing_ok -#define __pyx_n_s_mkdir __pyx_mstate_global->__pyx_n_s_mkdir -#define __pyx_n_u_mkv __pyx_mstate_global->__pyx_n_u_mkv -#define __pyx_n_s_mode __pyx_mstate_global->__pyx_n_s_mode -#define __pyx_n_s_module __pyx_mstate_global->__pyx_n_s_module -#define __pyx_n_s_mosaic_border 
__pyx_mstate_global->__pyx_n_s_mosaic_border -#define __pyx_n_u_mov __pyx_mstate_global->__pyx_n_u_mov -#define __pyx_n_u_mp4 __pyx_mstate_global->__pyx_n_u_mp4 -#define __pyx_n_u_mpeg __pyx_mstate_global->__pyx_n_u_mpeg -#define __pyx_n_u_mpg __pyx_mstate_global->__pyx_n_u_mpg -#define __pyx_n_u_mpo __pyx_mstate_global->__pyx_n_u_mpo -#define __pyx_n_s_multiprocessing_pool __pyx_mstate_global->__pyx_n_s_multiprocessing_pool -#define __pyx_n_s_n __pyx_mstate_global->__pyx_n_s_n -#define __pyx_n_s_name __pyx_mstate_global->__pyx_n_s_name -#define __pyx_n_s_name_2 __pyx_mstate_global->__pyx_n_s_name_2 -#define __pyx_kp_u_new __pyx_mstate_global->__pyx_kp_u_new -#define __pyx_n_s_new_path __pyx_mstate_global->__pyx_n_s_new_path -#define __pyx_n_s_new_video __pyx_mstate_global->__pyx_n_s_new_video -#define __pyx_n_s_next __pyx_mstate_global->__pyx_n_s_next -#define __pyx_n_s_nf __pyx_mstate_global->__pyx_n_s_nf -#define __pyx_n_s_ni __pyx_mstate_global->__pyx_n_s_ni -#define __pyx_n_s_nn __pyx_mstate_global->__pyx_n_s_nn -#define __pyx_n_s_np __pyx_mstate_global->__pyx_n_s_np -#define __pyx_n_s_npy __pyx_mstate_global->__pyx_n_s_npy -#define __pyx_n_s_numpy __pyx_mstate_global->__pyx_n_s_numpy -#define __pyx_n_s_nv __pyx_mstate_global->__pyx_n_s_nv -#define __pyx_n_s_open __pyx_mstate_global->__pyx_n_s_open -#define __pyx_n_s_orientation __pyx_mstate_global->__pyx_n_s_orientation -#define __pyx_n_s_os __pyx_mstate_global->__pyx_n_s_os -#define __pyx_n_s_out __pyx_mstate_global->__pyx_n_s_out -#define __pyx_n_s_p __pyx_mstate_global->__pyx_n_s_p -#define __pyx_n_s_padh __pyx_mstate_global->__pyx_n_s_padh -#define __pyx_n_s_padw __pyx_mstate_global->__pyx_n_s_padw -#define __pyx_n_s_padx __pyx_mstate_global->__pyx_n_s_padx -#define __pyx_n_s_pady __pyx_mstate_global->__pyx_n_s_pady -#define __pyx_n_s_parent __pyx_mstate_global->__pyx_n_s_parent -#define __pyx_n_s_parents __pyx_mstate_global->__pyx_n_s_parents -#define __pyx_n_s_path __pyx_mstate_global->__pyx_n_s_path 
-#define __pyx_n_s_pathlib __pyx_mstate_global->__pyx_n_s_pathlib -#define __pyx_n_s_paths __pyx_mstate_global->__pyx_n_s_paths -#define __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils __pyx_mstate_global->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils -#define __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2 __pyx_mstate_global->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2 -#define __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3 __pyx_mstate_global->__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3 -#define __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4 __pyx_mstate_global->__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4 -#define __pyx_n_u_png __pyx_mstate_global->__pyx_n_u_png -#define __pyx_n_s_prepare __pyx_mstate_global->__pyx_n_s_prepare -#define __pyx_n_s_print __pyx_mstate_global->__pyx_n_s_print -#define __pyx_n_s_qualname __pyx_mstate_global->__pyx_n_s_qualname -#define __pyx_n_s_r __pyx_mstate_global->__pyx_n_s_r -#define __pyx_n_s_random __pyx_mstate_global->__pyx_n_s_random -#define __pyx_n_s_ravel __pyx_mstate_global->__pyx_n_s_ravel -#define __pyx_n_s_read __pyx_mstate_global->__pyx_n_s_read -#define __pyx_n_s_recursive __pyx_mstate_global->__pyx_n_s_recursive -#define __pyx_n_s_relative_to __pyx_mstate_global->__pyx_n_s_relative_to -#define __pyx_n_s_release __pyx_mstate_global->__pyx_n_s_release -#define __pyx_n_s_repeat __pyx_mstate_global->__pyx_n_s_repeat -#define __pyx_n_s_reshape __pyx_mstate_global->__pyx_n_s_reshape -#define __pyx_n_s_resize __pyx_mstate_global->__pyx_n_s_resize -#define __pyx_n_s_resolve __pyx_mstate_global->__pyx_n_s_resolve -#define __pyx_n_s_ret_val __pyx_mstate_global->__pyx_n_s_ret_val -#define __pyx_n_s_rglob __pyx_mstate_global->__pyx_n_s_rglob -#define __pyx_n_s_rmtree __pyx_mstate_global->__pyx_n_s_rmtree -#define __pyx_n_s_rotation __pyx_mstate_global->__pyx_n_s_rotation -#define __pyx_n_s_rsplit __pyx_mstate_global->__pyx_n_s_rsplit -#define __pyx_n_s_s __pyx_mstate_global->__pyx_n_s_s -#define __pyx_n_s_sa 
__pyx_mstate_global->__pyx_n_s_sa -#define __pyx_n_s_sb __pyx_mstate_global->__pyx_n_s_sb -#define __pyx_n_s_seed __pyx_mstate_global->__pyx_n_s_seed -#define __pyx_n_s_segments __pyx_mstate_global->__pyx_n_s_segments -#define __pyx_n_s_segments4 __pyx_mstate_global->__pyx_n_s_segments4 -#define __pyx_n_s_segments9 __pyx_mstate_global->__pyx_n_s_segments9 -#define __pyx_n_s_self __pyx_mstate_global->__pyx_n_s_self -#define __pyx_n_s_send __pyx_mstate_global->__pyx_n_s_send -#define __pyx_n_s_sep __pyx_mstate_global->__pyx_n_s_sep -#define __pyx_n_s_set_name __pyx_mstate_global->__pyx_n_s_set_name -#define __pyx_n_s_shape __pyx_mstate_global->__pyx_n_s_shape -#define __pyx_n_s_shuffle __pyx_mstate_global->__pyx_n_s_shuffle -#define __pyx_n_s_shutil __pyx_mstate_global->__pyx_n_s_shutil -#define __pyx_n_s_size __pyx_mstate_global->__pyx_n_s_size -#define __pyx_n_s_spec __pyx_mstate_global->__pyx_n_s_spec -#define __pyx_n_s_split __pyx_mstate_global->__pyx_n_s_split -#define __pyx_n_s_splitlines __pyx_mstate_global->__pyx_n_s_splitlines -#define __pyx_n_s_stem __pyx_mstate_global->__pyx_n_s_stem -#define __pyx_n_s_stride __pyx_mstate_global->__pyx_n_s_stride -#define __pyx_n_s_strip __pyx_mstate_global->__pyx_n_s_strip -#define __pyx_n_s_suffix __pyx_mstate_global->__pyx_n_s_suffix -#define __pyx_n_s_sum __pyx_mstate_global->__pyx_n_s_sum -#define __pyx_n_s_super __pyx_mstate_global->__pyx_n_s_super -#define __pyx_n_s_test __pyx_mstate_global->__pyx_n_s_test -#define __pyx_n_s_threading __pyx_mstate_global->__pyx_n_s_threading -#define __pyx_n_s_throw __pyx_mstate_global->__pyx_n_s_throw -#define __pyx_n_u_tif __pyx_mstate_global->__pyx_n_u_tif -#define __pyx_n_u_tiff __pyx_mstate_global->__pyx_n_u_tiff -#define __pyx_n_s_time __pyx_mstate_global->__pyx_n_s_time -#define __pyx_n_s_tobytes __pyx_mstate_global->__pyx_n_s_tobytes -#define __pyx_n_s_torch __pyx_mstate_global->__pyx_n_s_torch -#define __pyx_n_s_torch_nn_functional 
__pyx_mstate_global->__pyx_n_s_torch_nn_functional -#define __pyx_n_s_total __pyx_mstate_global->__pyx_n_s_total -#define __pyx_n_s_tqdm __pyx_mstate_global->__pyx_n_s_tqdm -#define __pyx_n_s_transpose __pyx_mstate_global->__pyx_n_s_transpose -#define __pyx_kp_u_txt __pyx_mstate_global->__pyx_kp_u_txt -#define __pyx_n_s_txt_2 __pyx_mstate_global->__pyx_n_s_txt_2 -#define __pyx_n_s_uint8 __pyx_mstate_global->__pyx_n_s_uint8 -#define __pyx_n_s_uniform __pyx_mstate_global->__pyx_n_s_uniform -#define __pyx_n_s_unlink __pyx_mstate_global->__pyx_n_s_unlink -#define __pyx_n_s_update __pyx_mstate_global->__pyx_n_s_update -#define __pyx_kp_u_using_txt_labeled_images_only __pyx_mstate_global->__pyx_kp_u_using_txt_labeled_images_only -#define __pyx_n_u_video __pyx_mstate_global->__pyx_n_u_video -#define __pyx_kp_u_video_2 __pyx_mstate_global->__pyx_kp_u_video_2 -#define __pyx_n_s_video_flag __pyx_mstate_global->__pyx_n_s_video_flag -#define __pyx_kp_u_videos __pyx_mstate_global->__pyx_kp_u_videos -#define __pyx_n_s_videos_2 __pyx_mstate_global->__pyx_n_s_videos_2 -#define __pyx_n_s_w __pyx_mstate_global->__pyx_n_s_w -#define __pyx_n_s_w0 __pyx_mstate_global->__pyx_n_s_w0 -#define __pyx_n_u_webp __pyx_mstate_global->__pyx_n_u_webp -#define __pyx_n_s_weights __pyx_mstate_global->__pyx_n_s_weights -#define __pyx_n_u_wmv __pyx_mstate_global->__pyx_n_u_wmv -#define __pyx_n_s_wp __pyx_mstate_global->__pyx_n_s_wp -#define __pyx_n_s_write __pyx_mstate_global->__pyx_n_s_write -#define __pyx_n_s_x __pyx_mstate_global->__pyx_n_s_x -#define __pyx_n_s_x1 __pyx_mstate_global->__pyx_n_s_x1 -#define __pyx_n_s_x1a __pyx_mstate_global->__pyx_n_s_x1a -#define __pyx_n_s_x1b __pyx_mstate_global->__pyx_n_s_x1b -#define __pyx_n_s_x2 __pyx_mstate_global->__pyx_n_s_x2 -#define __pyx_n_s_x2a __pyx_mstate_global->__pyx_n_s_x2a -#define __pyx_n_s_x2b __pyx_mstate_global->__pyx_n_s_x2b -#define __pyx_n_s_xc __pyx_mstate_global->__pyx_n_s_xc -#define __pyx_n_s_xyn2xy __pyx_mstate_global->__pyx_n_s_xyn2xy 
-#define __pyx_n_s_xywh2xyxy __pyx_mstate_global->__pyx_n_s_xywh2xyxy -#define __pyx_n_s_xywhn2xyxy __pyx_mstate_global->__pyx_n_s_xywhn2xyxy -#define __pyx_n_s_y1 __pyx_mstate_global->__pyx_n_s_y1 -#define __pyx_n_s_y1a __pyx_mstate_global->__pyx_n_s_y1a -#define __pyx_n_s_y1b __pyx_mstate_global->__pyx_n_s_y1b -#define __pyx_n_s_y2 __pyx_mstate_global->__pyx_n_s_y2 -#define __pyx_n_s_y2a __pyx_mstate_global->__pyx_n_s_y2a -#define __pyx_n_s_y2b __pyx_mstate_global->__pyx_n_s_y2b -#define __pyx_n_s_yaml __pyx_mstate_global->__pyx_n_s_yaml -#define __pyx_n_s_yc __pyx_mstate_global->__pyx_n_s_yc -#define __pyx_n_s_zip __pyx_mstate_global->__pyx_n_s_zip -#define __pyx_n_s_zipfile __pyx_mstate_global->__pyx_n_s_zipfile -#define __pyx_float_0_0 __pyx_mstate_global->__pyx_float_0_0 -#define __pyx_float_0_1 __pyx_mstate_global->__pyx_float_0_1 -#define __pyx_float_0_9 __pyx_mstate_global->__pyx_float_0_9 -#define __pyx_float_1_2 __pyx_mstate_global->__pyx_float_1_2 -#define __pyx_int_0 __pyx_mstate_global->__pyx_int_0 -#define __pyx_int_1 __pyx_mstate_global->__pyx_int_1 -#define __pyx_int_2 __pyx_mstate_global->__pyx_int_2 -#define __pyx_int_3 __pyx_mstate_global->__pyx_int_3 -#define __pyx_int_4 __pyx_mstate_global->__pyx_int_4 -#define __pyx_int_5 __pyx_mstate_global->__pyx_int_5 -#define __pyx_int_6 __pyx_mstate_global->__pyx_int_6 -#define __pyx_int_7 __pyx_mstate_global->__pyx_int_7 -#define __pyx_int_8 __pyx_mstate_global->__pyx_int_8 -#define __pyx_int_32 __pyx_mstate_global->__pyx_int_32 -#define __pyx_int_114 __pyx_mstate_global->__pyx_int_114 -#define __pyx_int_274 __pyx_mstate_global->__pyx_int_274 -#define __pyx_int_640 __pyx_mstate_global->__pyx_int_640 -#define __pyx_int_neg_1 __pyx_mstate_global->__pyx_int_neg_1 -#define __pyx_tuple__2 __pyx_mstate_global->__pyx_tuple__2 -#define __pyx_slice__12 __pyx_mstate_global->__pyx_slice__12 -#define __pyx_slice__14 __pyx_mstate_global->__pyx_slice__14 -#define __pyx_slice__15 __pyx_mstate_global->__pyx_slice__15 
-#define __pyx_slice__16 __pyx_mstate_global->__pyx_slice__16 -#define __pyx_slice__22 __pyx_mstate_global->__pyx_slice__22 -#define __pyx_tuple__11 __pyx_mstate_global->__pyx_tuple__11 -#define __pyx_tuple__13 __pyx_mstate_global->__pyx_tuple__13 -#define __pyx_tuple__17 __pyx_mstate_global->__pyx_tuple__17 -#define __pyx_tuple__19 __pyx_mstate_global->__pyx_tuple__19 -#define __pyx_tuple__20 __pyx_mstate_global->__pyx_tuple__20 -#define __pyx_tuple__23 __pyx_mstate_global->__pyx_tuple__23 -#define __pyx_tuple__24 __pyx_mstate_global->__pyx_tuple__24 -#define __pyx_tuple__27 __pyx_mstate_global->__pyx_tuple__27 -#define __pyx_tuple__28 __pyx_mstate_global->__pyx_tuple__28 -#define __pyx_tuple__30 __pyx_mstate_global->__pyx_tuple__30 -#define __pyx_tuple__32 __pyx_mstate_global->__pyx_tuple__32 -#define __pyx_tuple__34 __pyx_mstate_global->__pyx_tuple__34 -#define __pyx_tuple__36 __pyx_mstate_global->__pyx_tuple__36 -#define __pyx_tuple__37 __pyx_mstate_global->__pyx_tuple__37 -#define __pyx_tuple__39 __pyx_mstate_global->__pyx_tuple__39 -#define __pyx_tuple__41 __pyx_mstate_global->__pyx_tuple__41 -#define __pyx_tuple__44 __pyx_mstate_global->__pyx_tuple__44 -#define __pyx_tuple__46 __pyx_mstate_global->__pyx_tuple__46 -#define __pyx_tuple__48 __pyx_mstate_global->__pyx_tuple__48 -#define __pyx_tuple__50 __pyx_mstate_global->__pyx_tuple__50 -#define __pyx_tuple__52 __pyx_mstate_global->__pyx_tuple__52 -#define __pyx_tuple__54 __pyx_mstate_global->__pyx_tuple__54 -#define __pyx_tuple__55 __pyx_mstate_global->__pyx_tuple__55 -#define __pyx_tuple__57 __pyx_mstate_global->__pyx_tuple__57 -#define __pyx_tuple__58 __pyx_mstate_global->__pyx_tuple__58 -#define __pyx_tuple__60 __pyx_mstate_global->__pyx_tuple__60 -#define __pyx_tuple__61 __pyx_mstate_global->__pyx_tuple__61 -#define __pyx_tuple__63 __pyx_mstate_global->__pyx_tuple__63 -#define __pyx_codeobj__29 __pyx_mstate_global->__pyx_codeobj__29 -#define __pyx_codeobj__31 __pyx_mstate_global->__pyx_codeobj__31 
-#define __pyx_codeobj__33 __pyx_mstate_global->__pyx_codeobj__33 -#define __pyx_codeobj__35 __pyx_mstate_global->__pyx_codeobj__35 -#define __pyx_codeobj__38 __pyx_mstate_global->__pyx_codeobj__38 -#define __pyx_codeobj__40 __pyx_mstate_global->__pyx_codeobj__40 -#define __pyx_codeobj__42 __pyx_mstate_global->__pyx_codeobj__42 -#define __pyx_codeobj__43 __pyx_mstate_global->__pyx_codeobj__43 -#define __pyx_codeobj__45 __pyx_mstate_global->__pyx_codeobj__45 -#define __pyx_codeobj__47 __pyx_mstate_global->__pyx_codeobj__47 -#define __pyx_codeobj__49 __pyx_mstate_global->__pyx_codeobj__49 -#define __pyx_codeobj__51 __pyx_mstate_global->__pyx_codeobj__51 -#define __pyx_codeobj__53 __pyx_mstate_global->__pyx_codeobj__53 -#define __pyx_codeobj__56 __pyx_mstate_global->__pyx_codeobj__56 -#define __pyx_codeobj__59 __pyx_mstate_global->__pyx_codeobj__59 -#define __pyx_codeobj__62 __pyx_mstate_global->__pyx_codeobj__62 -#endif -/* #### Code section: module_code ### */ - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":42 - * - * - * def get_hash(paths): # <<<<<<<<<<<<<< - * # Returns a single hash value of a list of paths (files or dirs) - * size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_1get_hash(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_1get_hash = {"get_hash", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_1get_hash, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_1get_hash(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, 
PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_paths = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("get_hash (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,0}; - #endif - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_paths)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 42, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "get_hash") < 0)) __PYX_ERR(0, 42, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_paths = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("get_hash", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 42, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.get_hash", __pyx_clineno, __pyx_lineno, __pyx_filename); - 
__Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_get_hash(__pyx_self, __pyx_v_paths); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_8get_hash_2generator(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":44 - * def get_hash(paths): - * # Returns a single hash value of a list of paths (files or dirs) - * size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes # <<<<<<<<<<<<<< - * h = hashlib.md5(str(size).encode()) # hash sizes - * h.update(''.join(paths).encode()) # hash paths - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_8get_hash_genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 44, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct 
__pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_8get_hash_2generator, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_get_hash_locals_genexpr, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils); if (unlikely(!gen)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.get_hash.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_8get_hash_2generator(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - PyObject *(*__pyx_t_3)(PyObject *); - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - int __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L7_resume_from_yield; - 
default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 44, __pyx_L1_error) - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_paths)) { __Pyx_RaiseClosureNameError("paths"); __PYX_ERR(0, 44, __pyx_L1_error) } - if (likely(PyList_CheckExact(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_paths)) || PyTuple_CheckExact(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_paths)) { - __pyx_t_1 = __pyx_cur_scope->__pyx_outer_scope->__pyx_v_paths; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0; - __pyx_t_3 = NULL; - } else { - __pyx_t_2 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_paths); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 44, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_3)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_2 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 44, __pyx_L1_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 44, __pyx_L1_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } - } else { - __pyx_t_4 = __pyx_t_3(__pyx_t_1); - if (unlikely(!__pyx_t_4)) { - PyObject* 
exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 44, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_4); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_p); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_p, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_os); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_path); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_exists); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_6, __pyx_cur_scope->__pyx_v_p}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_7, 1+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_8) { - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_os); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_path); if 
(unlikely(!__pyx_t_6)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_getsize); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_6, __pyx_cur_scope->__pyx_v_p}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_7, 1+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - __Pyx_XGIVEREF(__pyx_t_1); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_1; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_2; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_3; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L7_resume_from_yield:; - __pyx_t_1 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_1); - __pyx_t_2 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_3 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 44, __pyx_L1_error) - } - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - 
__Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":42 - * - * - * def get_hash(paths): # <<<<<<<<<<<<<< - * # Returns a single hash value of a list of paths (files or dirs) - * size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_get_hash(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_paths) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *__pyx_cur_scope; - PyObject *__pyx_v_size = NULL; - PyObject *__pyx_v_h = NULL; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_8get_hash_2generator = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_hash", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *)Py_None); - 
__Pyx_INCREF(Py_None); - __PYX_ERR(0, 42, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_v_paths = __pyx_v_paths; - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_paths); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_paths); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":44 - * def get_hash(paths): - * # Returns a single hash value of a list of paths (files or dirs) - * size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes # <<<<<<<<<<<<<< - * h = hashlib.md5(str(size).encode()) # hash sizes - * h.update(''.join(paths).encode()) # hash paths - */ - __pyx_t_1 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_8get_hash_genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_sum, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 44, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_size = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":45 - * # Returns a single hash value of a list of paths (files or dirs) - * size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes - * h = hashlib.md5(str(size).encode()) # hash sizes # <<<<<<<<<<<<<< - * h.update(''.join(paths).encode()) # hash paths - * return h.hexdigest() # return hash - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_hashlib); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_md5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_Str(__pyx_v_size); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_encode); if 
(unlikely(!__pyx_t_5)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_4, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_6, 0+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_5 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_5, __pyx_t_1}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_6, 1+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 45, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_v_h = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":46 - * size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes - * h = hashlib.md5(str(size).encode()) # hash sizes - * h.update(''.join(paths).encode()) # hash paths # <<<<<<<<<<<<<< - * return h.hexdigest() # return hash - * - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_h, __pyx_n_s_update); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 46, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __pyx_cur_scope->__pyx_v_paths; - __Pyx_INCREF(__pyx_t_1); - __pyx_t_5 = PyUnicode_Join(__pyx_kp_u_, __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 46, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyUnicode_AsEncodedString(((PyObject*)__pyx_t_5), NULL, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 46, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_5, __pyx_t_1}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_6, 1+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 46, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":47 - * h = hashlib.md5(str(size).encode()) # hash sizes - * h.update(''.join(paths).encode()) # hash paths - * return h.hexdigest() # return hash # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_h, __pyx_n_s_hexdigest); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 47, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - 
__pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_1, }; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_6, 0+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 47, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":42 - * - * - * def get_hash(paths): # <<<<<<<<<<<<<< - * # Returns a single hash value of a list of paths (files or dirs) - * size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.get_hash", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_size); - __Pyx_XDECREF(__pyx_v_h); - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_8get_hash_2generator); - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":50 - * - * - * def exif_size(img): # <<<<<<<<<<<<<< - * # Returns exif-corrected PIL size - * s = img.size # (width, height) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_3exif_size(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_3exif_size = {"exif_size", 
(PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_3exif_size, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_3exif_size(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_img = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("exif_size (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_img,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_img,0}; - #endif - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_img)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 50, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "exif_size") < 0)) __PYX_ERR(0, 50, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - 
__pyx_v_img = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("exif_size", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 50, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.exif_size", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_2exif_size(__pyx_self, __pyx_v_img); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_2exif_size(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_img) { - PyObject *__pyx_v_s = NULL; - PyObject *__pyx_v_rotation = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("exif_size", 0); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":52 - * def exif_size(img): - * # Returns exif-corrected PIL size - * s = img.size # (width, height) # <<<<<<<<<<<<<< - * try: - * rotation = dict(img._getexif().items())[orientation] - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_img, __pyx_n_s_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 52, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_s = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":53 - * # Returns exif-corrected PIL size - * s = img.size # (width, height) - * try: # <<<<<<<<<<<<<< - * rotation = dict(img._getexif().items())[orientation] - * if rotation == 6: # rotation 270 - */ - { - __Pyx_PyThreadState_declare - 
__Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - /*try:*/ { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":54 - * s = img.size # (width, height) - * try: - * rotation = dict(img._getexif().items())[orientation] # <<<<<<<<<<<<<< - * if rotation == 6: # rotation 270 - * s = (s[1], s[0]) - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_img, __pyx_n_s_getexif); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 54, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_7, }; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_8, 0+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 54, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_items); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 54, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_8 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_5, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_8, 0+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if 
(unlikely(!__pyx_t_1)) __PYX_ERR(0, 54, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyDict_Type)), __pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 54, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_orientation); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 54, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = __Pyx_PyDict_GetItem(__pyx_t_6, __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 54, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_rotation = __pyx_t_5; - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":55 - * try: - * rotation = dict(img._getexif().items())[orientation] - * if rotation == 6: # rotation 270 # <<<<<<<<<<<<<< - * s = (s[1], s[0]) - * elif rotation == 8: # rotation 90 - */ - __pyx_t_5 = __Pyx_PyInt_EqObjC(__pyx_v_rotation, __pyx_int_6, 6, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 55, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely((__pyx_t_9 < 0))) __PYX_ERR(0, 55, __pyx_L3_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_9) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":56 - * rotation = dict(img._getexif().items())[orientation] - * if rotation == 6: # rotation 270 - * s = (s[1], s[0]) # <<<<<<<<<<<<<< - * elif rotation == 8: # rotation 90 - * s = (s[1], s[0]) - */ - __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_s, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 56, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_s, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 56, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) 
__PYX_ERR(0, 56, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_1); - __pyx_t_5 = 0; - __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_s, __pyx_t_6); - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":55 - * try: - * rotation = dict(img._getexif().items())[orientation] - * if rotation == 6: # rotation 270 # <<<<<<<<<<<<<< - * s = (s[1], s[0]) - * elif rotation == 8: # rotation 90 - */ - goto __pyx_L9; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":57 - * if rotation == 6: # rotation 270 - * s = (s[1], s[0]) - * elif rotation == 8: # rotation 90 # <<<<<<<<<<<<<< - * s = (s[1], s[0]) - * except: - */ - __pyx_t_6 = __Pyx_PyInt_EqObjC(__pyx_v_rotation, __pyx_int_8, 8, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 57, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_9 < 0))) __PYX_ERR(0, 57, __pyx_L3_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (__pyx_t_9) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":58 - * s = (s[1], s[0]) - * elif rotation == 8: # rotation 90 - * s = (s[1], s[0]) # <<<<<<<<<<<<<< - * except: - * pass - */ - __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_s, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 58, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_s, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 58, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 58, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1); - __pyx_t_6 = 0; - __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_s, __pyx_t_5); - __pyx_t_5 = 0; - - /* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":57 - * if rotation == 6: # rotation 270 - * s = (s[1], s[0]) - * elif rotation == 8: # rotation 90 # <<<<<<<<<<<<<< - * s = (s[1], s[0]) - * except: - */ - } - __pyx_L9:; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":53 - * # Returns exif-corrected PIL size - * s = img.size # (width, height) - * try: # <<<<<<<<<<<<<< - * rotation = dict(img._getexif().items())[orientation] - * if rotation == 6: # rotation 270 - */ - } - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L8_try_end; - __pyx_L3_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":59 - * elif rotation == 8: # rotation 90 - * s = (s[1], s[0]) - * except: # <<<<<<<<<<<<<< - * pass - * - */ - /*except:*/ { - __Pyx_ErrRestore(0,0,0); - goto __pyx_L4_exception_handled; - } - __pyx_L4_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - __pyx_L8_try_end:; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":62 - * pass - * - * return s # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_s); - __pyx_r = __pyx_v_s; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":50 - * - * - * def exif_size(img): # <<<<<<<<<<<<<< - * # Returns exif-corrected PIL size - * s = img.size # (width, height) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.exif_size", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_s); - 
__Pyx_XDECREF(__pyx_v_rotation); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":65 - * - * - * def exif_transpose(image): # <<<<<<<<<<<<<< - * """ - * Transpose a PIL image accordingly if it has an EXIF Orientation tag. - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_5exif_transpose(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_4exif_transpose, "\n Transpose a PIL image accordingly if it has an EXIF Orientation tag.\n Inplace version of https://github.com/python-pillow/Pillow/blob/master/src/PIL/ImageOps.py exif_transpose()\n\n :param image: The image to transpose.\n :return: An image.\n "); -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_5exif_transpose = {"exif_transpose", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_5exif_transpose, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_4exif_transpose}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_5exif_transpose(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_image = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - 
__Pyx_RefNannySetupContext("exif_transpose (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_image,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_image,0}; - #endif - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_image)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 65, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "exif_transpose") < 0)) __PYX_ERR(0, 65, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_image = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("exif_transpose", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 65, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.exif_transpose", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_4exif_transpose(__pyx_self, __pyx_v_image); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_4exif_transpose(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_image) { - PyObject *__pyx_v_exif = NULL; 
- PyObject *__pyx_v_orientation = NULL; - PyObject *__pyx_v_method = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("exif_transpose", 0); - __Pyx_INCREF(__pyx_v_image); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":73 - * :return: An image. - * """ - * exif = image.getexif() # <<<<<<<<<<<<<< - * orientation = exif.get(0x0112, 1) # default 1 - * if orientation > 1: - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_image, __pyx_n_s_getexif_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_3, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 73, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_v_exif = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":74 - * """ - * exif = image.getexif() - * orientation = exif.get(0x0112, 1) # default 1 # <<<<<<<<<<<<<< - * if orientation > 1: - * method = {2: Image.FLIP_LEFT_RIGHT, - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_exif, __pyx_n_s_get); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 74, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_tuple__2, NULL); if 
(unlikely(!__pyx_t_2)) __PYX_ERR(0, 74, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_orientation = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":75 - * exif = image.getexif() - * orientation = exif.get(0x0112, 1) # default 1 - * if orientation > 1: # <<<<<<<<<<<<<< - * method = {2: Image.FLIP_LEFT_RIGHT, - * 3: Image.ROTATE_180, - */ - __pyx_t_2 = PyObject_RichCompare(__pyx_v_orientation, __pyx_int_1, Py_GT); __Pyx_XGOTREF(__pyx_t_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 75, __pyx_L1_error) - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_5) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":76 - * orientation = exif.get(0x0112, 1) # default 1 - * if orientation > 1: - * method = {2: Image.FLIP_LEFT_RIGHT, # <<<<<<<<<<<<<< - * 3: Image.ROTATE_180, - * 4: Image.FLIP_TOP_BOTTOM, - */ - __pyx_t_2 = __Pyx_PyDict_NewPresized(7); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Image); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_FLIP_LEFT_RIGHT); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyDict_SetItem(__pyx_t_2, __pyx_int_2, __pyx_t_3) < 0) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":77 - * if orientation > 1: - * method = {2: Image.FLIP_LEFT_RIGHT, - * 3: Image.ROTATE_180, # <<<<<<<<<<<<<< - * 4: Image.FLIP_TOP_BOTTOM, - * 5: Image.TRANSPOSE, - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Image); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = 
__Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_ROTATE_180); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_2, __pyx_int_3, __pyx_t_1) < 0) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":78 - * method = {2: Image.FLIP_LEFT_RIGHT, - * 3: Image.ROTATE_180, - * 4: Image.FLIP_TOP_BOTTOM, # <<<<<<<<<<<<<< - * 5: Image.TRANSPOSE, - * 6: Image.ROTATE_270, - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Image); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 78, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_FLIP_TOP_BOTTOM); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 78, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyDict_SetItem(__pyx_t_2, __pyx_int_4, __pyx_t_3) < 0) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":79 - * 3: Image.ROTATE_180, - * 4: Image.FLIP_TOP_BOTTOM, - * 5: Image.TRANSPOSE, # <<<<<<<<<<<<<< - * 6: Image.ROTATE_270, - * 7: Image.TRANSVERSE, - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Image); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 79, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_TRANSPOSE); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 79, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_2, __pyx_int_5, __pyx_t_1) < 0) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":80 - * 4: Image.FLIP_TOP_BOTTOM, - * 5: Image.TRANSPOSE, - * 6: Image.ROTATE_270, # <<<<<<<<<<<<<< - * 7: Image.TRANSVERSE, - * 8: Image.ROTATE_90, - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Image); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_ROTATE_270); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyDict_SetItem(__pyx_t_2, __pyx_int_6, __pyx_t_3) < 0) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":81 - * 5: Image.TRANSPOSE, - * 6: Image.ROTATE_270, - * 7: Image.TRANSVERSE, # <<<<<<<<<<<<<< - * 8: Image.ROTATE_90, - * }.get(orientation) - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Image); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_TRANSVERSE); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 81, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_2, __pyx_int_7, __pyx_t_1) < 0) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":82 - * 6: Image.ROTATE_270, - * 7: Image.TRANSVERSE, - * 8: Image.ROTATE_90, # <<<<<<<<<<<<<< - * }.get(orientation) - * if method is not None: - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Image); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 82, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_ROTATE_90); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 82, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyDict_SetItem(__pyx_t_2, __pyx_int_8, __pyx_t_3) < 0) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":83 - * 7: Image.TRANSVERSE, - * 8: Image.ROTATE_90, - * }.get(orientation) # <<<<<<<<<<<<<< - * if method is not None: - * image = image.transpose(method) - */ 
- __pyx_t_3 = __Pyx_PyDict_GetItemDefault(__pyx_t_2, __pyx_v_orientation, Py_None); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 83, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_method = __pyx_t_3; - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":84 - * 8: Image.ROTATE_90, - * }.get(orientation) - * if method is not None: # <<<<<<<<<<<<<< - * image = image.transpose(method) - * del exif[0x0112] - */ - __pyx_t_5 = (__pyx_v_method != Py_None); - __pyx_t_6 = (__pyx_t_5 != 0); - if (__pyx_t_6) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":85 - * }.get(orientation) - * if method is not None: - * image = image.transpose(method) # <<<<<<<<<<<<<< - * del exif[0x0112] - * image.info["exif"] = exif.tobytes() - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_image, __pyx_n_s_transpose); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 85, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_v_method}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 85, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF_SET(__pyx_v_image, __pyx_t_3); - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":86 - * if method is not None: - * image = image.transpose(method) - * del exif[0x0112] # <<<<<<<<<<<<<< - * image.info["exif"] = exif.tobytes() - * return image - */ - if (unlikely((__Pyx_DelItemInt(__pyx_v_exif, 0x0112, long, 1, 
__Pyx_PyInt_From_long, 0, 0, 1) < 0))) __PYX_ERR(0, 86, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":87 - * image = image.transpose(method) - * del exif[0x0112] - * image.info["exif"] = exif.tobytes() # <<<<<<<<<<<<<< - * return image - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_exif, __pyx_n_s_tobytes); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 87, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_1, }; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 87, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_image, __pyx_n_s_info); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 87, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (unlikely((PyObject_SetItem(__pyx_t_2, __pyx_n_u_exif, __pyx_t_3) < 0))) __PYX_ERR(0, 87, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":84 - * 8: Image.ROTATE_90, - * }.get(orientation) - * if method is not None: # <<<<<<<<<<<<<< - * image = image.transpose(method) - * del exif[0x0112] - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":75 - * exif = image.getexif() - * orientation = exif.get(0x0112, 1) # default 1 - * if orientation > 1: # <<<<<<<<<<<<<< - * method = {2: Image.FLIP_LEFT_RIGHT, - * 3: Image.ROTATE_180, - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":88 - * del exif[0x0112] - * 
image.info["exif"] = exif.tobytes() - * return image # <<<<<<<<<<<<<< - * - * class LoadImages: - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_image); - __pyx_r = __pyx_v_image; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":65 - * - * - * def exif_transpose(image): # <<<<<<<<<<<<<< - * """ - * Transpose a PIL image accordingly if it has an EXIF Orientation tag. - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.exif_transpose", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_exif); - __Pyx_XDECREF(__pyx_v_orientation); - __Pyx_XDECREF(__pyx_v_method); - __Pyx_XDECREF(__pyx_v_image); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":92 - * class LoadImages: - * # YOLOv5 image/video dataloader, i.e. 
`python detect.py --source image.jpg/vid.mp4` - * def __init__(self, path, img_size=640, stride=32, auto=True): # <<<<<<<<<<<<<< - * p = str(Path(path).resolve()) # os-agnostic absolute path - * if '*' in p: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_1__init__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_1__init__ = {"__init__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_1__init__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_1__init__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_path = 0; - PyObject *__pyx_v_img_size = 0; - PyObject *__pyx_v_stride = 0; - PyObject *__pyx_v_auto = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_path,&__pyx_n_s_img_size,&__pyx_n_s_stride,&__pyx_n_s_auto,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_path,&__pyx_n_s_img_size,&__pyx_n_s_stride,&__pyx_n_s_auto,0}; - #endif - 
PyObject* values[5] = {0,0,0,0,0}; - values[2] = ((PyObject *)((PyObject *)__pyx_int_640)); - values[3] = ((PyObject *)((PyObject *)__pyx_int_32)); - values[4] = ((PyObject *)((PyObject *)Py_True)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 92, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_path)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 92, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("__init__", 0, 2, 5, 1); __PYX_ERR(0, 92, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_img_size); - if (value) { values[2] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 92, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_stride); - if (value) { values[3] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 92, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = 
__Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_auto); - if (value) { values[4] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 92, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__init__") < 0)) __PYX_ERR(0, 92, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_self = values[0]; - __pyx_v_path = values[1]; - __pyx_v_img_size = values[2]; - __pyx_v_stride = values[3]; - __pyx_v_auto = values[4]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 0, 2, 5, __pyx_nargs); __PYX_ERR(0, 92, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.LoadImages.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages___init__(__pyx_self, __pyx_v_self, __pyx_v_path, __pyx_v_img_size, __pyx_v_stride, __pyx_v_auto); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_path, PyObject *__pyx_v_img_size, PyObject *__pyx_v_stride, PyObject *__pyx_v_auto) { - PyObject *__pyx_v_p = NULL; - 
PyObject *__pyx_v_files = NULL; - PyObject *__pyx_v_images = NULL; - PyObject *__pyx_v_videos = NULL; - PyObject *__pyx_v_ni = NULL; - PyObject *__pyx_v_nv = NULL; - PyObject *__pyx_8genexpr1__pyx_v_x = NULL; - PyObject *__pyx_8genexpr2__pyx_v_x = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - Py_ssize_t __pyx_t_11; - Py_UCS4 __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__init__", 0); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":93 - * # YOLOv5 image/video dataloader, i.e. `python detect.py --source image.jpg/vid.mp4` - * def __init__(self, path, img_size=640, stride=32, auto=True): - * p = str(Path(path).resolve()) # os-agnostic absolute path # <<<<<<<<<<<<<< - * if '*' in p: - * files = sorted(glob.glob(p, recursive=True)) # glob - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Path); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_v_path}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, 
__pyx_n_s_resolve); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_2, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 0+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = __Pyx_PyObject_Str(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 93, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_p = __pyx_t_3; - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":94 - * def __init__(self, path, img_size=640, stride=32, auto=True): - * p = str(Path(path).resolve()) # os-agnostic absolute path - * if '*' in p: # <<<<<<<<<<<<<< - * files = sorted(glob.glob(p, recursive=True)) # glob - * elif os.path.isdir(p): - */ - __pyx_t_6 = (__Pyx_PySequence_ContainsTF(__pyx_kp_u__3, __pyx_v_p, Py_EQ)); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 94, __pyx_L1_error) - __pyx_t_7 = (__pyx_t_6 != 0); - if (__pyx_t_7) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":95 - * p = str(Path(path).resolve()) # os-agnostic absolute path - * if '*' in p: - * files = sorted(glob.glob(p, recursive=True)) # glob # <<<<<<<<<<<<<< - * elif os.path.isdir(p): - * files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_glob); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = 
__Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_glob); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_p); - __Pyx_GIVEREF(__pyx_v_p); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_p); - __pyx_t_4 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_recursive, Py_True) < 0) __PYX_ERR(0, 95, __pyx_L1_error) - __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PySequence_List(__pyx_t_8); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 95, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_3 = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_9 = PyList_Sort(__pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(0, 95, __pyx_L1_error) - __pyx_v_files = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":94 - * def __init__(self, path, img_size=640, stride=32, auto=True): - * p = str(Path(path).resolve()) # os-agnostic absolute path - * if '*' in p: # <<<<<<<<<<<<<< - * files = sorted(glob.glob(p, recursive=True)) # glob - * elif os.path.isdir(p): - */ - goto __pyx_L3; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":96 - * if '*' in p: - * files = sorted(glob.glob(p, recursive=True)) # glob - * elif os.path.isdir(p): # <<<<<<<<<<<<<< - * files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir - * elif os.path.isfile(p): - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_os); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 96, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_path); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 96, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_isdir); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 96, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_8, __pyx_v_p}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 96, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 96, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_7) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":97 - * files = sorted(glob.glob(p, recursive=True)) # glob - * elif os.path.isdir(p): - * files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir # <<<<<<<<<<<<<< - * elif os.path.isfile(p): - * files = [p] # files - */ - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_glob); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_glob); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_os); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 97, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_10 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_path); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_10, __pyx_n_s_join); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_10, __pyx_v_p, __pyx_kp_u__4}; - __pyx_t_8 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 2+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_8}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_t_1 = PySequence_List(__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 97, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - 
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_3 = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_9 = PyList_Sort(__pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(0, 97, __pyx_L1_error) - __pyx_v_files = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":96 - * if '*' in p: - * files = sorted(glob.glob(p, recursive=True)) # glob - * elif os.path.isdir(p): # <<<<<<<<<<<<<< - * files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir - * elif os.path.isfile(p): - */ - goto __pyx_L3; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":98 - * elif os.path.isdir(p): - * files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir - * elif os.path.isfile(p): # <<<<<<<<<<<<<< - * files = [p] # files - * else: - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_os); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_path); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_isfile); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_v_p}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - 
__pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (likely(__pyx_t_7)) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":99 - * files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir - * elif os.path.isfile(p): - * files = [p] # files # <<<<<<<<<<<<<< - * else: - * raise Exception(f'ERROR: {p} does not exist') - */ - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_p); - __Pyx_GIVEREF(__pyx_v_p); - PyList_SET_ITEM(__pyx_t_3, 0, __pyx_v_p); - __pyx_v_files = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":98 - * elif os.path.isdir(p): - * files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir - * elif os.path.isfile(p): # <<<<<<<<<<<<<< - * files = [p] # files - * else: - */ - goto __pyx_L3; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":101 - * files = [p] # files - * else: - * raise Exception(f'ERROR: {p} does not exist') # <<<<<<<<<<<<<< - * - * images = [x for x in files if x.split('.')[-1].lower() in IMG_FORMATS] - */ - /*else*/ { - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_11 = 0; - __pyx_t_12 = 127; - __Pyx_INCREF(__pyx_kp_u_ERROR); - __pyx_t_11 += 7; - __Pyx_GIVEREF(__pyx_kp_u_ERROR); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_kp_u_ERROR); - __pyx_t_1 = __Pyx_PyObject_FormatSimple(__pyx_v_p, __pyx_empty_unicode); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_12 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_1) > __pyx_t_12) ? 
__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_1) : __pyx_t_12; - __pyx_t_11 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_INCREF(__pyx_kp_u_does_not_exist); - __pyx_t_11 += 15; - __Pyx_GIVEREF(__pyx_kp_u_does_not_exist); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_kp_u_does_not_exist); - __pyx_t_1 = __Pyx_PyUnicode_Join(__pyx_t_3, 3, __pyx_t_11, __pyx_t_12); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallOneArg(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(0, 101, __pyx_L1_error) - } - __pyx_L3:; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":103 - * raise Exception(f'ERROR: {p} does not exist') - * - * images = [x for x in files if x.split('.')[-1].lower() in IMG_FORMATS] # <<<<<<<<<<<<<< - * videos = [x for x in files if x.split('.')[-1].lower() in VID_FORMATS] - * ni, nv = len(images), len(videos) - */ - { /* enter inner scope */ - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 103, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __pyx_v_files; __Pyx_INCREF(__pyx_t_1); __pyx_t_11 = 0; - for (;;) { - if (__pyx_t_11 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_11); __Pyx_INCREF(__pyx_t_4); __pyx_t_11++; if (unlikely((0 < 0))) __PYX_ERR(0, 103, __pyx_L6_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_11); __pyx_t_11++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 103, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - __Pyx_XDECREF_SET(__pyx_8genexpr1__pyx_v_x, __pyx_t_4); - __pyx_t_4 = 
0; - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_8genexpr1__pyx_v_x, __pyx_n_s_split); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 103, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_10 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_10, __pyx_kp_u__5}; - __pyx_t_8 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 103, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_8, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 103, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_lower); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 103, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_8); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_8, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_2, }; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_8, __pyx_callargs+1-__pyx_t_5, 0+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 103, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_GetModuleGlobalName(__pyx_t_8, 
__pyx_n_s_IMG_FORMATS); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 103, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_7 = (__Pyx_PySequence_ContainsTF(__pyx_t_4, __pyx_t_8, Py_EQ)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 103, __pyx_L6_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_6 = (__pyx_t_7 != 0); - if (__pyx_t_6) { - if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_8genexpr1__pyx_v_x))) __PYX_ERR(0, 103, __pyx_L6_error) - } - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_x); __pyx_8genexpr1__pyx_v_x = 0; - goto __pyx_L10_exit_scope; - __pyx_L6_error:; - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_x); __pyx_8genexpr1__pyx_v_x = 0; - goto __pyx_L1_error; - __pyx_L10_exit_scope:; - } /* exit inner scope */ - __pyx_v_images = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":104 - * - * images = [x for x in files if x.split('.')[-1].lower() in IMG_FORMATS] - * videos = [x for x in files if x.split('.')[-1].lower() in VID_FORMATS] # <<<<<<<<<<<<<< - * ni, nv = len(images), len(videos) - * - */ - { /* enter inner scope */ - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 104, __pyx_L13_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __pyx_v_files; __Pyx_INCREF(__pyx_t_1); __pyx_t_11 = 0; - for (;;) { - if (__pyx_t_11 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_8 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_11); __Pyx_INCREF(__pyx_t_8); __pyx_t_11++; if (unlikely((0 < 0))) __PYX_ERR(0, 104, __pyx_L13_error) - #else - __pyx_t_8 = PySequence_ITEM(__pyx_t_1, __pyx_t_11); __pyx_t_11++; if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 104, __pyx_L13_error) - __Pyx_GOTREF(__pyx_t_8); - #endif - __Pyx_XDECREF_SET(__pyx_8genexpr2__pyx_v_x, __pyx_t_8); - __pyx_t_8 = 0; - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_8genexpr2__pyx_v_x, __pyx_n_s_split); if 
(unlikely(!__pyx_t_2)) __PYX_ERR(0, 104, __pyx_L13_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_10 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_10, __pyx_kp_u__5}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 104, __pyx_L13_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_4, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 104, __pyx_L13_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_lower); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 104, __pyx_L13_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_2, }; - __pyx_t_8 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_5, 0+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 104, __pyx_L13_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_VID_FORMATS); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 104, __pyx_L13_error) - 
__Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = (__Pyx_PySequence_ContainsTF(__pyx_t_8, __pyx_t_4, Py_EQ)); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 104, __pyx_L13_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_7 = (__pyx_t_6 != 0); - if (__pyx_t_7) { - if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_8genexpr2__pyx_v_x))) __PYX_ERR(0, 104, __pyx_L13_error) - } - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_x); __pyx_8genexpr2__pyx_v_x = 0; - goto __pyx_L17_exit_scope; - __pyx_L13_error:; - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_x); __pyx_8genexpr2__pyx_v_x = 0; - goto __pyx_L1_error; - __pyx_L17_exit_scope:; - } /* exit inner scope */ - __pyx_v_videos = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":105 - * images = [x for x in files if x.split('.')[-1].lower() in IMG_FORMATS] - * videos = [x for x in files if x.split('.')[-1].lower() in VID_FORMATS] - * ni, nv = len(images), len(videos) # <<<<<<<<<<<<<< - * - * self.img_size = img_size - */ - __pyx_t_11 = PyList_GET_SIZE(__pyx_v_images); if (unlikely(__pyx_t_11 == ((Py_ssize_t)-1))) __PYX_ERR(0, 105, __pyx_L1_error) - __pyx_t_3 = PyInt_FromSsize_t(__pyx_t_11); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 105, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_11 = PyList_GET_SIZE(__pyx_v_videos); if (unlikely(__pyx_t_11 == ((Py_ssize_t)-1))) __PYX_ERR(0, 105, __pyx_L1_error) - __pyx_t_1 = PyInt_FromSsize_t(__pyx_t_11); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 105, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_ni = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_nv = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":107 - * ni, nv = len(images), len(videos) - * - * self.img_size = img_size # <<<<<<<<<<<<<< - * self.stride = stride - * self.files = images + videos - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_img_size, 
__pyx_v_img_size) < 0) __PYX_ERR(0, 107, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":108 - * - * self.img_size = img_size - * self.stride = stride # <<<<<<<<<<<<<< - * self.files = images + videos - * self.nf = ni + nv # number of files - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_stride, __pyx_v_stride) < 0) __PYX_ERR(0, 108, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":109 - * self.img_size = img_size - * self.stride = stride - * self.files = images + videos # <<<<<<<<<<<<<< - * self.nf = ni + nv # number of files - * self.video_flag = [False] * ni + [True] * nv - */ - __pyx_t_1 = PyNumber_Add(__pyx_v_images, __pyx_v_videos); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 109, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_files, __pyx_t_1) < 0) __PYX_ERR(0, 109, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":110 - * self.stride = stride - * self.files = images + videos - * self.nf = ni + nv # number of files # <<<<<<<<<<<<<< - * self.video_flag = [False] * ni + [True] * nv - * self.mode = 'image' - */ - __pyx_t_1 = PyNumber_Add(__pyx_v_ni, __pyx_v_nv); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_nf, __pyx_t_1) < 0) __PYX_ERR(0, 110, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":111 - * self.files = images + videos - * self.nf = ni + nv # number of files - * self.video_flag = [False] * ni + [True] * nv # <<<<<<<<<<<<<< - * self.mode = 'image' - * self.auto = auto - */ - __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 111, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(Py_False); - __Pyx_GIVEREF(Py_False); - PyList_SET_ITEM(__pyx_t_1, 0, Py_False); - { PyObject* __pyx_temp = 
PyNumber_InPlaceMultiply(__pyx_t_1, __pyx_v_ni); if (unlikely(!__pyx_temp)) __PYX_ERR(0, 111, __pyx_L1_error) - __Pyx_GOTREF(__pyx_temp); - __Pyx_DECREF(__pyx_t_1); - __pyx_t_1 = __pyx_temp; - } - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 111, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(Py_True); - __Pyx_GIVEREF(Py_True); - PyList_SET_ITEM(__pyx_t_3, 0, Py_True); - { PyObject* __pyx_temp = PyNumber_InPlaceMultiply(__pyx_t_3, __pyx_v_nv); if (unlikely(!__pyx_temp)) __PYX_ERR(0, 111, __pyx_L1_error) - __Pyx_GOTREF(__pyx_temp); - __Pyx_DECREF(__pyx_t_3); - __pyx_t_3 = __pyx_temp; - } - __pyx_t_4 = PyNumber_Add(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 111, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_video_flag, __pyx_t_4) < 0) __PYX_ERR(0, 111, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":112 - * self.nf = ni + nv # number of files - * self.video_flag = [False] * ni + [True] * nv - * self.mode = 'image' # <<<<<<<<<<<<<< - * self.auto = auto - * if any(videos): - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_mode, __pyx_n_u_image) < 0) __PYX_ERR(0, 112, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":113 - * self.video_flag = [False] * ni + [True] * nv - * self.mode = 'image' - * self.auto = auto # <<<<<<<<<<<<<< - * if any(videos): - * self.new_video(videos[0]) # new video - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_auto, __pyx_v_auto) < 0) __PYX_ERR(0, 113, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":114 - * self.mode = 'image' - * self.auto = auto - * if any(videos): # <<<<<<<<<<<<<< - * self.new_video(videos[0]) # new video - * else: - */ - __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_builtin_any, __pyx_v_videos); if 
(unlikely(!__pyx_t_4)) __PYX_ERR(0, 114, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 114, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_7) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":115 - * self.auto = auto - * if any(videos): - * self.new_video(videos[0]) # new video # <<<<<<<<<<<<<< - * else: - * self.cap = None - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_new_video); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 115, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_GetItemInt_List(__pyx_v_videos, 0, long, 1, __Pyx_PyInt_From_long, 1, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 115, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_8 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_8, __pyx_t_1}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 115, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":114 - * self.mode = 'image' - * self.auto = auto - * if any(videos): # <<<<<<<<<<<<<< - * self.new_video(videos[0]) # new video - * else: - */ - goto __pyx_L18; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":117 - * self.new_video(videos[0]) # new video - * else: - * self.cap = None # <<<<<<<<<<<<<< - * assert self.nf > 0, f'No images or videos found in {p}. 
' \ - * f'Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}' - */ - /*else*/ { - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_cap, Py_None) < 0) __PYX_ERR(0, 117, __pyx_L1_error) - } - __pyx_L18:; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":118 - * else: - * self.cap = None - * assert self.nf > 0, f'No images or videos found in {p}. ' \ # <<<<<<<<<<<<<< - * f'Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}' - * - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(!Py_OptimizeFlag)) { - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_nf); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = PyObject_RichCompare(__pyx_t_4, __pyx_int_0, Py_GT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_7)) { - __pyx_t_3 = PyTuple_New(6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_11 = 0; - __pyx_t_12 = 127; - __Pyx_INCREF(__pyx_kp_u_No_images_or_videos_found_in); - __pyx_t_11 += 29; - __Pyx_GIVEREF(__pyx_kp_u_No_images_or_videos_found_in); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_kp_u_No_images_or_videos_found_in); - __pyx_t_4 = __Pyx_PyObject_FormatSimple(__pyx_v_p, __pyx_empty_unicode); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_12 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_4) > __pyx_t_12) ? 
__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_4) : __pyx_t_12; - __pyx_t_11 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_INCREF(__pyx_kp_u_Supported_formats_are_images); - __pyx_t_11 += 33; - __Pyx_GIVEREF(__pyx_kp_u_Supported_formats_are_images); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_kp_u_Supported_formats_are_images); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":119 - * self.cap = None - * assert self.nf > 0, f'No images or videos found in {p}. ' \ - * f'Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}' # <<<<<<<<<<<<<< - * - * def __iter__(self): - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_IMG_FORMATS); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 119, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = __Pyx_PyObject_FormatSimple(__pyx_t_4, __pyx_empty_unicode); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 119, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_12 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_1) > __pyx_t_12) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_1) : __pyx_t_12; - __pyx_t_11 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 3, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_INCREF(__pyx_kp_u_videos); - __pyx_t_11 += 9; - __Pyx_GIVEREF(__pyx_kp_u_videos); - PyTuple_SET_ITEM(__pyx_t_3, 4, __pyx_kp_u_videos); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_VID_FORMATS); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 119, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_FormatSimple(__pyx_t_1, __pyx_empty_unicode); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 119, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_12 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_4) > __pyx_t_12) ? 
__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_4) : __pyx_t_12; - __pyx_t_11 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_3, 5, __pyx_t_4); - __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":118 - * else: - * self.cap = None - * assert self.nf > 0, f'No images or videos found in {p}. ' \ # <<<<<<<<<<<<<< - * f'Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}' - * - */ - __pyx_t_4 = __Pyx_PyUnicode_Join(__pyx_t_3, 6, __pyx_t_11, __pyx_t_12); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_builtin_AssertionError, __pyx_t_4, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(0, 118, __pyx_L1_error) - } - } - #else - if ((1)); else __PYX_ERR(0, 118, __pyx_L1_error) - #endif - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":92 - * class LoadImages: - * # YOLOv5 image/video dataloader, i.e. `python detect.py --source image.jpg/vid.mp4` - * def __init__(self, path, img_size=640, stride=32, auto=True): # <<<<<<<<<<<<<< - * p = str(Path(path).resolve()) # os-agnostic absolute path - * if '*' in p: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.LoadImages.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_p); - __Pyx_XDECREF(__pyx_v_files); - __Pyx_XDECREF(__pyx_v_images); - __Pyx_XDECREF(__pyx_v_videos); - __Pyx_XDECREF(__pyx_v_ni); - __Pyx_XDECREF(__pyx_v_nv); - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_x); - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_x); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} 
- -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":121 - * f'Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}' - * - * def __iter__(self): # <<<<<<<<<<<<<< - * self.count = 0 - * return self - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_3__iter__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_3__iter__ = {"__iter__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_3__iter__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_3__iter__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__iter__ (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - #endif - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = 
__Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 121, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__iter__") < 0)) __PYX_ERR(0, 121, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__iter__", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 121, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.LoadImages.__iter__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_2__iter__(__pyx_self, __pyx_v_self); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_2__iter__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__iter__", 0); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":122 - * - * def __iter__(self): - * self.count = 0 # <<<<<<<<<<<<<< - * return self - * - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_count, __pyx_int_0) < 0) __PYX_ERR(0, 122, __pyx_L1_error) - - /* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":123 - * def __iter__(self): - * self.count = 0 - * return self # <<<<<<<<<<<<<< - * - * def __next__(self): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self); - __pyx_r = __pyx_v_self; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":121 - * f'Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}' - * - * def __iter__(self): # <<<<<<<<<<<<<< - * self.count = 0 - * return self - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.LoadImages.__iter__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":125 - * return self - * - * def __next__(self): # <<<<<<<<<<<<<< - * if self.count == self.nf: - * raise StopIteration - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_5__next__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_5__next__ = {"__next__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_5__next__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_5__next__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - 
CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__next__ (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - #endif - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 125, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__next__") < 0)) __PYX_ERR(0, 125, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__next__", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 125, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.LoadImages.__next__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_4__next__(__pyx_self, __pyx_v_self); - - /* function exit code */ - 
__Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_4__next__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_v_path = NULL; - PyObject *__pyx_v_ret_val = NULL; - PyObject *__pyx_v_img0 = NULL; - PyObject *__pyx_v_s = NULL; - PyObject *__pyx_v_img = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - PyObject *__pyx_t_6 = NULL; - PyObject *(*__pyx_t_7)(PyObject *); - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - Py_UCS4 __pyx_t_10; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__next__", 0); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":126 - * - * def __next__(self): - * if self.count == self.nf: # <<<<<<<<<<<<<< - * raise StopIteration - * path = self.files[self.count] - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_count); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_nf); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_RichCompare(__pyx_t_1, __pyx_t_2, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(__pyx_t_4)) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":127 - * def __next__(self): - * if self.count == self.nf: - * raise StopIteration # <<<<<<<<<<<<<< - * path = self.files[self.count] - * - */ - __Pyx_Raise(__pyx_builtin_StopIteration, 0, 0, 
0); - __PYX_ERR(0, 127, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":126 - * - * def __next__(self): - * if self.count == self.nf: # <<<<<<<<<<<<<< - * raise StopIteration - * path = self.files[self.count] - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":128 - * if self.count == self.nf: - * raise StopIteration - * path = self.files[self.count] # <<<<<<<<<<<<<< - * - * if self.video_flag[self.count]: - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_files); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 128, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_count); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 128, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 128, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_path = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":130 - * path = self.files[self.count] - * - * if self.video_flag[self.count]: # <<<<<<<<<<<<<< - * # Read video - * self.mode = 'video' - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_video_flag); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_count); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 130, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_4) { - - /* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":132 - * if self.video_flag[self.count]: - * # Read video - * self.mode = 'video' # <<<<<<<<<<<<<< - * ret_val, img0 = self.cap.read() - * while not ret_val: - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_mode, __pyx_n_u_video) < 0) __PYX_ERR(0, 132, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":133 - * # Read video - * self.mode = 'video' - * ret_val, img0 = self.cap.read() # <<<<<<<<<<<<<< - * while not ret_val: - * self.count += 1 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_cap); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_read); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_2, }; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_5, 0+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_3))) || (PyList_CheckExact(__pyx_t_3))) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 133, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if 
(likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_6 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_6); - index = 0; __pyx_t_1 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_7(__pyx_t_6), 2) < 0) __PYX_ERR(0, 133, __pyx_L1_error) - __pyx_t_7 = NULL; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_7 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 133, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_ret_val = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_img0 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":134 - * self.mode = 'video' - * ret_val, img0 = self.cap.read() - * while not ret_val: # <<<<<<<<<<<<<< - * self.count += 1 - * self.cap.release() - */ - while (1) { - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_v_ret_val); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 134, __pyx_L1_error) - 
__pyx_t_8 = ((!__pyx_t_4) != 0); - if (!__pyx_t_8) break; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":135 - * ret_val, img0 = self.cap.read() - * while not ret_val: - * self.count += 1 # <<<<<<<<<<<<<< - * self.cap.release() - * if self.count == self.nf: # last video - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_count); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_count, __pyx_t_2) < 0) __PYX_ERR(0, 135, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":136 - * while not ret_val: - * self.count += 1 - * self.cap.release() # <<<<<<<<<<<<<< - * if self.count == self.nf: # last video - * raise StopIteration - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_cap); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_release); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_3, }; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_5, 0+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - 
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":137 - * self.count += 1 - * self.cap.release() - * if self.count == self.nf: # last video # <<<<<<<<<<<<<< - * raise StopIteration - * else: - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_count); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_nf); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyObject_RichCompare(__pyx_t_2, __pyx_t_1, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 137, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(__pyx_t_8)) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":138 - * self.cap.release() - * if self.count == self.nf: # last video - * raise StopIteration # <<<<<<<<<<<<<< - * else: - * path = self.files[self.count] - */ - __Pyx_Raise(__pyx_builtin_StopIteration, 0, 0, 0); - __PYX_ERR(0, 138, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":137 - * self.count += 1 - * self.cap.release() - * if self.count == self.nf: # last video # <<<<<<<<<<<<<< - * raise StopIteration - * else: - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":140 - * raise StopIteration - * else: - * path = self.files[self.count] # <<<<<<<<<<<<<< - * self.new_video(path) - * ret_val, img0 = self.cap.read() - */ - /*else*/ { - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_files); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_count); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(0, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_path, __pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":141 - * else: - * path = self.files[self.count] - * self.new_video(path) # <<<<<<<<<<<<<< - * ret_val, img0 = self.cap.read() - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_new_video); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_v_path}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":142 - * path = self.files[self.count] - * self.new_video(path) - * ret_val, img0 = self.cap.read() # <<<<<<<<<<<<<< - * - * self.frame += 1 - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_cap); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_read); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); 
__pyx_t_1 = 0; - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_1, }; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 0+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_2))) || (PyList_CheckExact(__pyx_t_2))) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 142, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_3 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_6 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_6); - index = 0; 
__pyx_t_3 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_3)) goto __pyx_L10_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 1; __pyx_t_1 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_1)) goto __pyx_L10_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_7(__pyx_t_6), 2) < 0) __PYX_ERR(0, 142, __pyx_L1_error) - __pyx_t_7 = NULL; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - goto __pyx_L11_unpacking_done; - __pyx_L10_unpacking_failed:; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_7 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 142, __pyx_L1_error) - __pyx_L11_unpacking_done:; - } - __Pyx_DECREF_SET(__pyx_v_ret_val, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_DECREF_SET(__pyx_v_img0, __pyx_t_1); - __pyx_t_1 = 0; - } - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":144 - * ret_val, img0 = self.cap.read() - * - * self.frame += 1 # <<<<<<<<<<<<<< - * s = f'video {self.count + 1}/{self.nf} ({self.frame}/{self.frames}) {path}: ' - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_frame); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_t_2, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_frame, __pyx_t_1) < 0) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":145 - * - * self.frame += 1 - * s = f'video {self.count + 1}/{self.nf} ({self.frame}/{self.frames}) {path}: ' # <<<<<<<<<<<<<< - * - * else: - */ - __pyx_t_1 = PyTuple_New(11); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_9 = 0; - __pyx_t_10 = 127; - __Pyx_INCREF(__pyx_kp_u_video_2); - __pyx_t_9 += 6; - __Pyx_GIVEREF(__pyx_kp_u_video_2); - 
PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_kp_u_video_2); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_count); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyInt_AddObjC(__pyx_t_2, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_FormatSimple(__pyx_t_3, __pyx_empty_unicode); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_10 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_2) > __pyx_t_10) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_2) : __pyx_t_10; - __pyx_t_9 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_2); - __pyx_t_2 = 0; - __Pyx_INCREF(__pyx_kp_u__6); - __pyx_t_9 += 1; - __Pyx_GIVEREF(__pyx_kp_u__6); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_kp_u__6); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_nf); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_FormatSimple(__pyx_t_2, __pyx_empty_unicode); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_10 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) > __pyx_t_10) ? 
__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) : __pyx_t_10; - __pyx_t_9 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_1, 3, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_kp_u__7); - __pyx_t_9 += 2; - __Pyx_GIVEREF(__pyx_kp_u__7); - PyTuple_SET_ITEM(__pyx_t_1, 4, __pyx_kp_u__7); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_frame); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_FormatSimple(__pyx_t_3, __pyx_empty_unicode); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_10 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_2) > __pyx_t_10) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_2) : __pyx_t_10; - __pyx_t_9 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_1, 5, __pyx_t_2); - __pyx_t_2 = 0; - __Pyx_INCREF(__pyx_kp_u__6); - __pyx_t_9 += 1; - __Pyx_GIVEREF(__pyx_kp_u__6); - PyTuple_SET_ITEM(__pyx_t_1, 6, __pyx_kp_u__6); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_frames); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_FormatSimple(__pyx_t_2, __pyx_empty_unicode); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_10 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) > __pyx_t_10) ? 
__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) : __pyx_t_10; - __pyx_t_9 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_1, 7, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_kp_u__8); - __pyx_t_9 += 2; - __Pyx_GIVEREF(__pyx_kp_u__8); - PyTuple_SET_ITEM(__pyx_t_1, 8, __pyx_kp_u__8); - __pyx_t_3 = __Pyx_PyObject_FormatSimple(__pyx_v_path, __pyx_empty_unicode); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_10 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) > __pyx_t_10) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) : __pyx_t_10; - __pyx_t_9 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_1, 9, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_kp_u__9); - __pyx_t_9 += 2; - __Pyx_GIVEREF(__pyx_kp_u__9); - PyTuple_SET_ITEM(__pyx_t_1, 10, __pyx_kp_u__9); - __pyx_t_3 = __Pyx_PyUnicode_Join(__pyx_t_1, 11, __pyx_t_9, __pyx_t_10); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 145, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_s = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":130 - * path = self.files[self.count] - * - * if self.video_flag[self.count]: # <<<<<<<<<<<<<< - * # Read video - * self.mode = 'video' - */ - goto __pyx_L4; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":149 - * else: - * # Read image - * self.count += 1 # <<<<<<<<<<<<<< - * img0 = cv2.imread(path) # BGR - * assert img0 is not None, f'Image Not Found {path}' - */ - /*else*/ { - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_count); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if 
(__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_count, __pyx_t_1) < 0) __PYX_ERR(0, 149, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":150 - * # Read image - * self.count += 1 - * img0 = cv2.imread(path) # BGR # <<<<<<<<<<<<<< - * assert img0 is not None, f'Image Not Found {path}' - * s = f'image {self.count}/{self.nf} {path}: ' - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_cv2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_imread); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_v_path}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_v_img0 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":151 - * self.count += 1 - * img0 = cv2.imread(path) # BGR - * assert img0 is not None, f'Image Not Found {path}' # <<<<<<<<<<<<<< - * s = f'image {self.count}/{self.nf} {path}: ' - * - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(!Py_OptimizeFlag)) { - __pyx_t_8 = (__pyx_v_img0 != Py_None); - __pyx_t_4 = (__pyx_t_8 != 0); - if (unlikely(!__pyx_t_4)) { - __pyx_t_1 = __Pyx_PyObject_FormatSimple(__pyx_v_path, __pyx_empty_unicode); if (unlikely(!__pyx_t_1)) 
__PYX_ERR(0, 151, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyUnicode_Concat(__pyx_kp_u_Image_Not_Found, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 151, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_Raise(__pyx_builtin_AssertionError, __pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(0, 151, __pyx_L1_error) - } - } - #else - if ((1)); else __PYX_ERR(0, 151, __pyx_L1_error) - #endif - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":152 - * img0 = cv2.imread(path) # BGR - * assert img0 is not None, f'Image Not Found {path}' - * s = f'image {self.count}/{self.nf} {path}: ' # <<<<<<<<<<<<<< - * - * # Padded resize - */ - __pyx_t_2 = PyTuple_New(7); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_9 = 0; - __pyx_t_10 = 127; - __Pyx_INCREF(__pyx_kp_u_image_2); - __pyx_t_9 += 6; - __Pyx_GIVEREF(__pyx_kp_u_image_2); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_kp_u_image_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_count); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_FormatSimple(__pyx_t_1, __pyx_empty_unicode); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_10 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) > __pyx_t_10) ? 
__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_3) : __pyx_t_10; - __pyx_t_9 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_kp_u__6); - __pyx_t_9 += 1; - __Pyx_GIVEREF(__pyx_kp_u__6); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_kp_u__6); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_nf); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_FormatSimple(__pyx_t_3, __pyx_empty_unicode); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_10 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_1) > __pyx_t_10) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_1) : __pyx_t_10; - __pyx_t_9 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_INCREF(__pyx_kp_u__10); - __pyx_t_9 += 1; - __Pyx_GIVEREF(__pyx_kp_u__10); - PyTuple_SET_ITEM(__pyx_t_2, 4, __pyx_kp_u__10); - __pyx_t_1 = __Pyx_PyObject_FormatSimple(__pyx_v_path, __pyx_empty_unicode); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_1) > __pyx_t_10) ? 
__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_1) : __pyx_t_10; - __pyx_t_9 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 5, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_INCREF(__pyx_kp_u__9); - __pyx_t_9 += 2; - __Pyx_GIVEREF(__pyx_kp_u__9); - PyTuple_SET_ITEM(__pyx_t_2, 6, __pyx_kp_u__9); - __pyx_t_1 = __Pyx_PyUnicode_Join(__pyx_t_2, 7, __pyx_t_9, __pyx_t_10); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_s = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - } - __pyx_L4:; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":155 - * - * # Padded resize - * img = letterbox(img0, self.img_size, stride=self.stride, auto=self.auto)[0] # <<<<<<<<<<<<<< - * - * # Convert - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_letterbox); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_img_size); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_img0); - __Pyx_GIVEREF(__pyx_v_img0); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_img0); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_stride); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_stride, __pyx_t_6) < 0) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_auto); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 155, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_6); - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_auto, __pyx_t_6) < 0) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_6, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 155, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_v_img = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":158 - * - * # Convert - * img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB # <<<<<<<<<<<<<< - * img = np.ascontiguousarray(img) - * - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_img, __pyx_n_s_transpose); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 158, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_tuple__11}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 158, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = __Pyx_PyObject_GetItem(__pyx_t_2, __pyx_slice__12); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 158, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF_SET(__pyx_v_img, __pyx_t_6); - 
__pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":159 - * # Convert - * img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB - * img = np.ascontiguousarray(img) # <<<<<<<<<<<<<< - * - * return path, img, img0, self.cap, s - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_v_img}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF_SET(__pyx_v_img, __pyx_t_6); - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":161 - * img = np.ascontiguousarray(img) - * - * return path, img, img0, self.cap, s # <<<<<<<<<<<<<< - * - * def new_video(self, path): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_cap); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = PyTuple_New(5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_path); - __Pyx_GIVEREF(__pyx_v_path); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_path); - __Pyx_INCREF(__pyx_v_img); - 
__Pyx_GIVEREF(__pyx_v_img); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_img); - __Pyx_INCREF(__pyx_v_img0); - __Pyx_GIVEREF(__pyx_v_img0); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_v_img0); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_3, 3, __pyx_t_6); - __Pyx_INCREF(__pyx_v_s); - __Pyx_GIVEREF(__pyx_v_s); - PyTuple_SET_ITEM(__pyx_t_3, 4, __pyx_v_s); - __pyx_t_6 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":125 - * return self - * - * def __next__(self): # <<<<<<<<<<<<<< - * if self.count == self.nf: - * raise StopIteration - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.LoadImages.__next__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_path); - __Pyx_XDECREF(__pyx_v_ret_val); - __Pyx_XDECREF(__pyx_v_img0); - __Pyx_XDECREF(__pyx_v_s); - __Pyx_XDECREF(__pyx_v_img); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":163 - * return path, img, img0, self.cap, s - * - * def new_video(self, path): # <<<<<<<<<<<<<< - * self.frame = 0 - * self.cap = cv2.VideoCapture(path) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_7new_video(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_7new_video = {"new_video", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_7new_video, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; 
-static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_7new_video(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_path = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("new_video (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_path,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_path,0}; - #endif - PyObject* values[2] = {0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 163, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_path)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 163, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("new_video", 1, 2, 2, 1); __PYX_ERR(0, 163, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if 
(unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "new_video") < 0)) __PYX_ERR(0, 163, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_path = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("new_video", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 163, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.LoadImages.new_video", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_6new_video(__pyx_self, __pyx_v_self, __pyx_v_path); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_6new_video(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_path) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("new_video", 0); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":164 - * - * def new_video(self, path): - * self.frame = 0 # <<<<<<<<<<<<<< - * self.cap = cv2.VideoCapture(path) - * self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_frame, __pyx_int_0) < 0) __PYX_ERR(0, 164, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":165 - * def new_video(self, path): - * 
self.frame = 0 - * self.cap = cv2.VideoCapture(path) # <<<<<<<<<<<<<< - * self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_cv2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_VideoCapture); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_v_path}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_cap, __pyx_t_1) < 0) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":166 - * self.frame = 0 - * self.cap = cv2.VideoCapture(path) - * self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_cap); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 166, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_get); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 166, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_cv2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 
166, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_CAP_PROP_FRAME_COUNT); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 166, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_t_5}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 166, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = __Pyx_PyNumber_Int(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 166, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_frames, __pyx_t_2) < 0) __PYX_ERR(0, 166, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":163 - * return path, img, img0, self.cap, s - * - * def new_video(self, path): # <<<<<<<<<<<<<< - * self.frame = 0 - * self.cap = cv2.VideoCapture(path) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.LoadImages.new_video", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":168 - * self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self.nf # number of files - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_9__len__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_9__len__ = {"__len__", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_9__len__, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_9__len__(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,0}; - #endif - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - 
switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 168, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "__len__") < 0)) __PYX_ERR(0, 168, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_self = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__len__", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 168, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.LoadImages.__len__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_8__len__(__pyx_self, __pyx_v_self); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_8__len__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__len__", 0); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":169 - * - * def __len__(self): - * return self.nf # number of files # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_nf); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 169, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); 
- __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":168 - * self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self.nf # number of files - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.LoadImages.__len__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":172 - * - * - * def img2label_paths(img_paths): # <<<<<<<<<<<<<< - * # Define label paths as a function of image paths - * sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_7img2label_paths(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_7img2label_paths = {"img2label_paths", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_7img2label_paths, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_7img2label_paths(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_img_paths = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); 
- int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("img2label_paths (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_img_paths,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_img_paths,0}; - #endif - PyObject* values[1] = {0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_img_paths)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 172, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "img2label_paths") < 0)) __PYX_ERR(0, 172, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_img_paths = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("img2label_paths", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 172, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.img2label_paths", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_6img2label_paths(__pyx_self, __pyx_v_img_paths); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_6img2label_paths(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_img_paths) { - PyObject *__pyx_v_sa = NULL; - PyObject *__pyx_v_sb = NULL; - PyObject *__pyx_8genexpr3__pyx_v_x = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - Py_ssize_t __pyx_t_5; - PyObject *(*__pyx_t_6)(PyObject *); - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - int __pyx_t_10; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("img2label_paths", 0); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":174 - * def img2label_paths(img_paths): - * # Define label paths as a function of image paths - * sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings # <<<<<<<<<<<<<< - * return [sb.join(x.rsplit(sa, 1)).rsplit('.', 1)[0] + '.txt' for x in img_paths] - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_os); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_sep); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_2, __pyx_n_u_images); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_os); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_sep); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_t_1, 
__pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_os); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_sep); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Add(__pyx_t_1, __pyx_n_u_labels); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_os); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_sep); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_3, __pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 174, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_v_sa = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_sb = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":175 - * # Define label paths as a function of image paths - * sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings - * return [sb.join(x.rsplit(sa, 1)).rsplit('.', 1)[0] + '.txt' for x in img_paths] # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - { /* enter inner scope */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 175, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_1); - if (likely(PyList_CheckExact(__pyx_v_img_paths)) || PyTuple_CheckExact(__pyx_v_img_paths)) { - __pyx_t_2 = __pyx_v_img_paths; __Pyx_INCREF(__pyx_t_2); 
__pyx_t_5 = 0; - __pyx_t_6 = NULL; - } else { - __pyx_t_5 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_img_paths); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 175, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 175, __pyx_L5_error) - } - for (;;) { - if (likely(!__pyx_t_6)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_5); __Pyx_INCREF(__pyx_t_4); __pyx_t_5++; if (unlikely((0 < 0))) __PYX_ERR(0, 175, __pyx_L5_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_2, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 175, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_5); __Pyx_INCREF(__pyx_t_4); __pyx_t_5++; if (unlikely((0 < 0))) __PYX_ERR(0, 175, __pyx_L5_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_2, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 175, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } - } else { - __pyx_t_4 = __pyx_t_6(__pyx_t_2); - if (unlikely(!__pyx_t_4)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 175, __pyx_L5_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_4); - } - __Pyx_XDECREF_SET(__pyx_8genexpr3__pyx_v_x, __pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_sb, __pyx_n_s_join); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 175, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_8genexpr3__pyx_v_x, __pyx_n_s_rsplit); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 175, __pyx_L5_error) - 
__Pyx_GOTREF(__pyx_t_8); - __pyx_t_9 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) { - __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_8); - if (likely(__pyx_t_9)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8); - __Pyx_INCREF(__pyx_t_9); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_8, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_9, __pyx_v_sa, __pyx_int_1}; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_8, __pyx_callargs+1-__pyx_t_10, 2+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 175, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __pyx_t_8 = NULL; - __pyx_t_10 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_10 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_8, __pyx_t_7}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_10, 1+__pyx_t_10); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 175, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_rsplit); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 175, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_tuple__13, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 175, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetItemInt(__pyx_t_4, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 175, __pyx_L5_error) - 
__Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Add(__pyx_t_3, __pyx_kp_u_txt); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 175, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_4))) __PYX_ERR(0, 175, __pyx_L5_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_x); __pyx_8genexpr3__pyx_v_x = 0; - goto __pyx_L8_exit_scope; - __pyx_L5_error:; - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_x); __pyx_8genexpr3__pyx_v_x = 0; - goto __pyx_L1_error; - __pyx_L8_exit_scope:; - } /* exit inner scope */ - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":172 - * - * - * def img2label_paths(img_paths): # <<<<<<<<<<<<<< - * # Define label paths as a function of image paths - * sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.img2label_paths", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_sa); - __Pyx_XDECREF(__pyx_v_sb); - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_x); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":179 - * - * # Ancillary functions -------------------------------------------------------------------------------------------------- - * def load_image(self, i): # <<<<<<<<<<<<<< - * # loads 1 image from dataset index 'i', returns im, original hw, resized hw - * im = self.imgs[i] - */ - -/* Python wrapper */ 
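The generated code above compiles the `img2label_paths` helper shown in the embedded `datasets.py` source comments. A runnable Python sketch of it follows; the body of the list comprehension is reconstructed from the generated calls (`rsplit`, `join`, the `'.txt'` concatenation), so treat it as an approximation of the original rather than a verbatim copy:

```python
import os

def img2label_paths(img_paths):
    # Define label paths as a function of image paths:
    # replace the last /images/ path component with /labels/
    # and swap the image extension for .txt
    sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep  # /images/, /labels/ substrings
    return [sb.join(x.rsplit(sa, 1)).rsplit('.', 1)[0] + '.txt' for x in img_paths]
```

On a POSIX system, for example, `/data/images/train/a.jpg` maps to `/data/labels/train/a.txt` (on Windows the separators differ accordingly).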
-static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9load_image(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9load_image = {"load_image", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9load_image, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9load_image(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_i = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("load_image (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_i,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_i,0}; - #endif - PyObject* values[2] = {0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) 
!= 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 179, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_i)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 179, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("load_image", 1, 2, 2, 1); __PYX_ERR(0, 179, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "load_image") < 0)) __PYX_ERR(0, 179, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_i = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("load_image", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 179, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.load_image", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_8load_image(__pyx_self, __pyx_v_self, __pyx_v_i); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_8load_image(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_i) { - PyObject *__pyx_v_im = NULL; - PyObject *__pyx_v_npy = NULL; - PyObject *__pyx_v_path = NULL; - PyObject *__pyx_v_h0 = NULL; - PyObject *__pyx_v_w0 = NULL; - PyObject *__pyx_v_r = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - 
PyObject *__pyx_t_2 = NULL; - int __pyx_t_3; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *(*__pyx_t_8)(PyObject *); - PyObject *__pyx_t_9 = NULL; - int __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("load_image", 0); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":181 - * def load_image(self, i): - * # loads 1 image from dataset index 'i', returns im, original hw, resized hw - * im = self.imgs[i] # <<<<<<<<<<<<<< - * if im is None: # not cached in ram - * npy = self.img_npy[i] - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_imgs); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 181, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_i); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 181, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_im = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":182 - * # loads 1 image from dataset index 'i', returns im, original hw, resized hw - * im = self.imgs[i] - * if im is None: # not cached in ram # <<<<<<<<<<<<<< - * npy = self.img_npy[i] - * if npy and npy.exists(): # load npy - */ - __pyx_t_3 = (__pyx_v_im == Py_None); - __pyx_t_4 = (__pyx_t_3 != 0); - if (__pyx_t_4) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":183 - * im = self.imgs[i] - * if im is None: # not cached in ram - * npy = self.img_npy[i] # <<<<<<<<<<<<<< - * if npy and npy.exists(): # load npy - * im = np.load(npy) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_img_npy); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_t_2, __pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); 
__pyx_t_2 = 0; - __pyx_v_npy = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":184 - * if im is None: # not cached in ram - * npy = self.img_npy[i] - * if npy and npy.exists(): # load npy # <<<<<<<<<<<<<< - * im = np.load(npy) - * else: # read image - */ - __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_v_npy); if (unlikely((__pyx_t_3 < 0))) __PYX_ERR(0, 184, __pyx_L1_error) - if (__pyx_t_3) { - } else { - __pyx_t_4 = __pyx_t_3; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_npy, __pyx_n_s_exists); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_5, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_6, 0+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_3 < 0))) __PYX_ERR(0, 184, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_4 = __pyx_t_3; - __pyx_L5_bool_binop_done:; - if (__pyx_t_4) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":185 - * npy = self.img_npy[i] - * if npy and npy.exists(): # load npy - * im = np.load(npy) # <<<<<<<<<<<<<< - * else: # read image - * path = self.img_files[i] - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_load); if 
(unlikely(!__pyx_t_5)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_v_npy}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_6, 1+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __Pyx_DECREF_SET(__pyx_v_im, __pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":184 - * if im is None: # not cached in ram - * npy = self.img_npy[i] - * if npy and npy.exists(): # load npy # <<<<<<<<<<<<<< - * im = np.load(npy) - * else: # read image - */ - goto __pyx_L4; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":187 - * im = np.load(npy) - * else: # read image - * path = self.img_files[i] # <<<<<<<<<<<<<< - * im = cv2.imread(path) # BGR - * assert im is not None, f'Image Not Found {path}' - */ - /*else*/ { - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_img_files); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_i); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 187, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_path = __pyx_t_5; - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":188 - * else: # read image - * path = self.img_files[i] - * im = cv2.imread(path) # BGR # <<<<<<<<<<<<<< - * assert im is not None, f'Image Not Found {path}' - * h0, w0 = 
im.shape[:2] # orig hw - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_cv2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 188, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_imread); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 188, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_v_path}; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_6, 1+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 188, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF_SET(__pyx_v_im, __pyx_t_5); - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":189 - * path = self.img_files[i] - * im = cv2.imread(path) # BGR - * assert im is not None, f'Image Not Found {path}' # <<<<<<<<<<<<<< - * h0, w0 = im.shape[:2] # orig hw - * r = self.img_size / max(h0, w0) # ratio - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(!Py_OptimizeFlag)) { - __pyx_t_4 = (__pyx_v_im != Py_None); - __pyx_t_3 = (__pyx_t_4 != 0); - if (unlikely(!__pyx_t_3)) { - __pyx_t_5 = __Pyx_PyObject_FormatSimple(__pyx_v_path, __pyx_empty_unicode); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = __Pyx_PyUnicode_Concat(__pyx_kp_u_Image_Not_Found, __pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_builtin_AssertionError, __pyx_t_2, 0, 0); - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(0, 189, __pyx_L1_error) - } - } - #else - if ((1)); else __PYX_ERR(0, 189, __pyx_L1_error) - #endif - } - __pyx_L4:; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":190 - * im = cv2.imread(path) # BGR - * assert im is not None, f'Image Not Found {path}' - * h0, w0 = im.shape[:2] # orig hw # <<<<<<<<<<<<<< - * r = self.img_size / max(h0, w0) # ratio - * if r != 1: # if sizes are not equal - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_im, __pyx_n_s_shape); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 190, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = __Pyx_PyObject_GetSlice(__pyx_t_2, 0, 2, NULL, NULL, &__pyx_slice__14, 0, 1, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 190, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_5))) || (PyList_CheckExact(__pyx_t_5))) { - PyObject* sequence = __pyx_t_5; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 190, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 190, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 190, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_7 = PyObject_GetIter(__pyx_t_5); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 190, __pyx_L1_error) 
- __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_7); - index = 0; __pyx_t_2 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_2)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_7), 2) < 0) __PYX_ERR(0, 190, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 190, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_v_h0 = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_w0 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":191 - * assert im is not None, f'Image Not Found {path}' - * h0, w0 = im.shape[:2] # orig hw - * r = self.img_size / max(h0, w0) # ratio # <<<<<<<<<<<<<< - * if r != 1: # if sizes are not equal - * im = cv2.resize(im, (int(w0 * r), int(h0 * r)), - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_img_size); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 191, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_w0); - __pyx_t_1 = __pyx_v_w0; - __Pyx_INCREF(__pyx_v_h0); - __pyx_t_2 = __pyx_v_h0; - __pyx_t_9 = PyObject_RichCompare(__pyx_t_1, __pyx_t_2, Py_GT); __Pyx_XGOTREF(__pyx_t_9); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 191, __pyx_L1_error) - __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely((__pyx_t_3 < 0))) __PYX_ERR(0, 191, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (__pyx_t_3) { - __Pyx_INCREF(__pyx_t_1); - __pyx_t_7 = __pyx_t_1; - } else { - __Pyx_INCREF(__pyx_t_2); - __pyx_t_7 = __pyx_t_2; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 
0; - __pyx_t_1 = __Pyx_PyNumber_Divide(__pyx_t_5, __pyx_t_7); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 191, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_v_r = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":192 - * h0, w0 = im.shape[:2] # orig hw - * r = self.img_size / max(h0, w0) # ratio - * if r != 1: # if sizes are not equal # <<<<<<<<<<<<<< - * im = cv2.resize(im, (int(w0 * r), int(h0 * r)), - * interpolation=cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR) - */ - __pyx_t_1 = __Pyx_PyInt_NeObjC(__pyx_v_r, __pyx_int_1, 1, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_3 < 0))) __PYX_ERR(0, 192, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_3) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":193 - * r = self.img_size / max(h0, w0) # ratio - * if r != 1: # if sizes are not equal - * im = cv2.resize(im, (int(w0 * r), int(h0 * r)), # <<<<<<<<<<<<<< - * interpolation=cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR) - * return im, (h0, w0), im.shape[:2] # im, hw_original, hw_resized - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_cv2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_resize); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Multiply(__pyx_v_w0, __pyx_v_r); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = __Pyx_PyNumber_Int(__pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = 
PyNumber_Multiply(__pyx_v_h0, __pyx_v_r); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyNumber_Int(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_2); - __pyx_t_5 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_im); - __Pyx_GIVEREF(__pyx_v_im); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_im); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":194 - * if r != 1: # if sizes are not equal - * im = cv2.resize(im, (int(w0 * r), int(h0 * r)), - * interpolation=cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR) # <<<<<<<<<<<<<< - * return im, (h0, w0), im.shape[:2] # im, hw_original, hw_resized - * else: - */ - __pyx_t_1 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 194, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_9 = PyObject_RichCompare(__pyx_v_r, __pyx_int_1, Py_LT); __Pyx_XGOTREF(__pyx_t_9); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 194, __pyx_L1_error) - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 194, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (__pyx_t_4) { - } else { - __pyx_t_3 = __pyx_t_4; - goto __pyx_L10_bool_binop_done; - } - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_augment); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 194, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_9); if 
(unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 194, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_10 = ((!__pyx_t_4) != 0); - __pyx_t_3 = __pyx_t_10; - __pyx_L10_bool_binop_done:; - if (__pyx_t_3) { - __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_cv2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 194, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_9, __pyx_n_s_INTER_AREA); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 194, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_5 = __pyx_t_11; - __pyx_t_11 = 0; - } else { - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_cv2); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 194, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_11, __pyx_n_s_INTER_LINEAR); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 194, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_5 = __pyx_t_9; - __pyx_t_9 = 0; - } - if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_interpolation, __pyx_t_5) < 0) __PYX_ERR(0, 194, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":193 - * r = self.img_size / max(h0, w0) # ratio - * if r != 1: # if sizes are not equal - * im = cv2.resize(im, (int(w0 * r), int(h0 * r)), # <<<<<<<<<<<<<< - * interpolation=cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR) - * return im, (h0, w0), im.shape[:2] # im, hw_original, hw_resized - */ - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_im, __pyx_t_5); - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":192 - * h0, w0 = im.shape[:2] # orig hw - * r = self.img_size / 
max(h0, w0) # ratio - * if r != 1: # if sizes are not equal # <<<<<<<<<<<<<< - * im = cv2.resize(im, (int(w0 * r), int(h0 * r)), - * interpolation=cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR) - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":195 - * im = cv2.resize(im, (int(w0 * r), int(h0 * r)), - * interpolation=cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR) - * return im, (h0, w0), im.shape[:2] # im, hw_original, hw_resized # <<<<<<<<<<<<<< - * else: - * return self.imgs[i], self.img_hw0[i], self.img_hw[i] # im, hw_original, hw_resized - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_h0); - __Pyx_GIVEREF(__pyx_v_h0); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_h0); - __Pyx_INCREF(__pyx_v_w0); - __Pyx_GIVEREF(__pyx_v_w0); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_v_w0); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_im, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetSlice(__pyx_t_1, 0, 2, NULL, NULL, &__pyx_slice__14, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 195, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_im); - __Pyx_GIVEREF(__pyx_v_im); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_im); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_t_2); - __pyx_t_5 = 0; - __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":182 - * # loads 1 image from dataset index 'i', returns im, original hw, resized hw - * im = self.imgs[i] - * if im is None: # not cached in ram # 
<<<<<<<<<<<<<< - * npy = self.img_npy[i] - * if npy and npy.exists(): # load npy - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":197 - * return im, (h0, w0), im.shape[:2] # im, hw_original, hw_resized - * else: - * return self.imgs[i], self.img_hw0[i], self.img_hw[i] # im, hw_original, hw_resized # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_imgs); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_i); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_img_hw0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_i); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_img_hw); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_7 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_i); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 197, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_t_7); - __pyx_t_2 = 0; - __pyx_t_5 = 0; - __pyx_t_7 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":179 - * - * # Ancillary functions 
-------------------------------------------------------------------------------------------------- - * def load_image(self, i): # <<<<<<<<<<<<<< - * # loads 1 image from dataset index 'i', returns im, original hw, resized hw - * im = self.imgs[i] - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.load_image", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_im); - __Pyx_XDECREF(__pyx_v_npy); - __Pyx_XDECREF(__pyx_v_path); - __Pyx_XDECREF(__pyx_v_h0); - __Pyx_XDECREF(__pyx_v_w0); - __Pyx_XDECREF(__pyx_v_r); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":200 - * - * - * def load_mosaic(self, index): # <<<<<<<<<<<<<< - * # YOLOv5 4-mosaic loader. 
Loads 1 image + 3 random images into a 4-image mosaic - * labels4, segments4 = [], [] - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic = {"load_mosaic", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_index = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("load_mosaic (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_index,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_index,0}; - #endif - PyObject* values[2] = {0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = 
__Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 200, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_index)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 200, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("load_mosaic", 1, 2, 2, 1); __PYX_ERR(0, 200, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "load_mosaic") < 0)) __PYX_ERR(0, 200, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_index = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("load_mosaic", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 200, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.load_mosaic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10load_mosaic(__pyx_self, __pyx_v_self, __pyx_v_index); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic_2generator1(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":204 - * labels4, segments4 = [], [] - * s = self.img_size - * yc, xc = (int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border) # mosaic center x, y # <<<<<<<<<<<<<< - * indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - * random.shuffle(indices) - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic_genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 204, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic_2generator1, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_load_mosaic_locals_genexpr, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils); if (unlikely(!gen)) __PYX_ERR(0, 
204, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.load_mosaic.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic_2generator1(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L6_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 204, __pyx_L1_error) - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_self)) { __Pyx_RaiseClosureNameError("self"); __PYX_ERR(0, 204, __pyx_L1_error) } - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_self, __pyx_n_s_mosaic_border); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 204, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 204, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_1); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 204, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_1); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 204, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 204, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_x); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_x, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - 
__Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_random); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_uniform); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyNumber_Negative(__pyx_cur_scope->__pyx_v_x); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_s)) { __Pyx_RaiseClosureNameError("s"); __PYX_ERR(0, 204, __pyx_L1_error) } - __pyx_t_7 = __Pyx_PyInt_MultiplyCObj(__pyx_int_2, __pyx_cur_scope->__pyx_outer_scope->__pyx_v_s, 2, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = PyNumber_Add(__pyx_t_7, __pyx_cur_scope->__pyx_v_x); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = NULL; - __pyx_t_9 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_9 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_7, __pyx_t_5, __pyx_t_8}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_9, 2+__pyx_t_9); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = __Pyx_PyNumber_Int(__pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_6; - 
__pyx_t_6 = 0; - __Pyx_XGIVEREF(__pyx_t_2); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_2; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_3; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_4; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L6_resume_from_yield:; - __pyx_t_2 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_2); - __pyx_t_3 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_4 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 204, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":200 - * - * - * def load_mosaic(self, index): # <<<<<<<<<<<<<< - * # YOLOv5 4-mosaic loader. 
Loads 1 image + 3 random images into a 4-image mosaic - * labels4, segments4 = [], [] - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10load_mosaic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_index) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *__pyx_cur_scope; - PyObject *__pyx_v_labels4 = NULL; - PyObject *__pyx_v_segments4 = NULL; - PyObject *__pyx_v_yc = NULL; - PyObject *__pyx_v_xc = NULL; - PyObject *__pyx_v_indices = NULL; - PyObject *__pyx_v_i = NULL; - PyObject *__pyx_v_img = NULL; - CYTHON_UNUSED PyObject *__pyx_v__ = NULL; - PyObject *__pyx_v_h = NULL; - PyObject *__pyx_v_w = NULL; - PyObject *__pyx_v_img4 = NULL; - PyObject *__pyx_v_x1a = NULL; - PyObject *__pyx_v_y1a = NULL; - PyObject *__pyx_v_x2a = NULL; - PyObject *__pyx_v_y2a = NULL; - PyObject *__pyx_v_x1b = NULL; - PyObject *__pyx_v_y1b = NULL; - PyObject *__pyx_v_x2b = NULL; - PyObject *__pyx_v_y2b = NULL; - PyObject *__pyx_v_padw = NULL; - PyObject *__pyx_v_padh = NULL; - PyObject *__pyx_v_labels = NULL; - PyObject *__pyx_v_segments = NULL; - PyObject *__pyx_v_x = NULL; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic_2generator1 = 0; - PyObject *__pyx_8genexpr5__pyx_v_x = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *(*__pyx_t_5)(PyObject *); - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - Py_ssize_t __pyx_t_8; - PyObject *(*__pyx_t_9)(PyObject *); - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - PyObject *__pyx_t_13 = NULL; - int __pyx_t_14; - long __pyx_t_15; - Py_ssize_t __pyx_t_16; - PyObject *(*__pyx_t_17)(PyObject *); - int __pyx_t_18; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
__Pyx_RefNannySetupContext("load_mosaic", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 200, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_v_self = __pyx_v_self; - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_self); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_self); - __Pyx_INCREF(__pyx_v_index); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":202 - * def load_mosaic(self, index): - * # YOLOv5 4-mosaic loader. Loads 1 image + 3 random images into a 4-image mosaic - * labels4, segments4 = [], [] # <<<<<<<<<<<<<< - * s = self.img_size - * yc, xc = (int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border) # mosaic center x, y - */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 202, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 202, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_labels4 = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_segments4 = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":203 - * # YOLOv5 4-mosaic loader. 
Loads 1 image + 3 random images into a 4-image mosaic - * labels4, segments4 = [], [] - * s = self.img_size # <<<<<<<<<<<<<< - * yc, xc = (int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border) # mosaic center x, y - * indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_img_size); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 203, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_cur_scope->__pyx_v_s = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":204 - * labels4, segments4 = [], [] - * s = self.img_size - * yc, xc = (int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border) # mosaic center x, y # <<<<<<<<<<<<<< - * indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - * random.shuffle(indices) - */ - __pyx_t_2 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic_genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if ((likely(PyTuple_CheckExact(__pyx_t_2))) || (PyList_CheckExact(__pyx_t_2))) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 204, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = 
PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_4); - index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_1)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 204, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 204, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_yc = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_xc = __pyx_t_3; - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":205 - * s = self.img_size - * yc, xc = (int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border) # mosaic center x, y - * indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices # <<<<<<<<<<<<<< - * random.shuffle(indices) - * for i, index in enumerate(indices): - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_index); - __Pyx_GIVEREF(__pyx_v_index); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_v_index); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_random); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, 
__pyx_n_s_choices); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_indices); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_k, __pyx_int_3) < 0) __PYX_ERR(0, 205, __pyx_L1_error) - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_4, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Add(__pyx_t_2, __pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_v_indices = __pyx_t_3; - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":206 - * yc, xc = (int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border) # mosaic center x, y - * indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - * random.shuffle(indices) # <<<<<<<<<<<<<< - * for i, index in enumerate(indices): - * # Load image - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_random); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_shuffle); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_6); 
__pyx_t_6 = 0; - __pyx_t_6 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_6, __pyx_v_indices}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_7, 1+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 206, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":207 - * indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - * random.shuffle(indices) - * for i, index in enumerate(indices): # <<<<<<<<<<<<<< - * # Load image - * img, _, (h, w) = load_image(self, index) - */ - __Pyx_INCREF(__pyx_int_0); - __pyx_t_3 = __pyx_int_0; - if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) { - __pyx_t_2 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_2); __pyx_t_8 = 0; - __pyx_t_9 = NULL; - } else { - __pyx_t_8 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 207, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_9 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 207, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_9)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_8 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_6 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_8); __Pyx_INCREF(__pyx_t_6); __pyx_t_8++; if (unlikely((0 < 0))) __PYX_ERR(0, 207, __pyx_L1_error) - #else - __pyx_t_6 = PySequence_ITEM(__pyx_t_2, __pyx_t_8); __pyx_t_8++; if 
(unlikely(!__pyx_t_6)) __PYX_ERR(0, 207, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - } else { - if (__pyx_t_8 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_6 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_8); __Pyx_INCREF(__pyx_t_6); __pyx_t_8++; if (unlikely((0 < 0))) __PYX_ERR(0, 207, __pyx_L1_error) - #else - __pyx_t_6 = PySequence_ITEM(__pyx_t_2, __pyx_t_8); __pyx_t_8++; if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 207, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - } - } else { - __pyx_t_6 = __pyx_t_9(__pyx_t_2); - if (unlikely(!__pyx_t_6)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 207, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_6); - } - __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_6); - __pyx_t_6 = 0; - __Pyx_INCREF(__pyx_t_3); - __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_3); - __pyx_t_6 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 207, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); - __pyx_t_3 = __pyx_t_6; - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":209 - * for i, index in enumerate(indices): - * # Load image - * img, _, (h, w) = load_image(self, index) # <<<<<<<<<<<<<< - * - * # place img in img4 - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_load_image); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_1, 
__pyx_cur_scope->__pyx_v_self, __pyx_v_index}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_7, 2+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_6))) || (PyList_CheckExact(__pyx_t_6))) { - PyObject* sequence = __pyx_t_6; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 3)) { - if (size > 3) __Pyx_RaiseTooManyValuesError(3); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 209, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_10 = PyTuple_GET_ITEM(sequence, 2); - } else { - __pyx_t_4 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - __pyx_t_10 = PyList_GET_ITEM(sequence, 2); - } - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_10); - #else - __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = PySequence_ITEM(sequence, 2); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - #endif - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_11 = PyObject_GetIter(__pyx_t_6); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_11); - index = 0; __pyx_t_4 = __pyx_t_5(__pyx_t_11); if (unlikely(!__pyx_t_4)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_4); - 
index = 1; __pyx_t_1 = __pyx_t_5(__pyx_t_11); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 2; __pyx_t_10 = __pyx_t_5(__pyx_t_11); if (unlikely(!__pyx_t_10)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_10); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_11), 3) < 0) __PYX_ERR(0, 209, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 209, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_img, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v__, __pyx_t_1); - __pyx_t_1 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_10))) || (PyList_CheckExact(__pyx_t_10))) { - PyObject* sequence = __pyx_t_10; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 209, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_11 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_12 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_11 = PyList_GET_ITEM(sequence, 0); - __pyx_t_12 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_11); - __Pyx_INCREF(__pyx_t_12); - #else - __pyx_t_11 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_12 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - #endif - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_13 = PyObject_GetIter(__pyx_t_10); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 209, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_13); - index = 0; __pyx_t_11 = __pyx_t_5(__pyx_t_13); if (unlikely(!__pyx_t_11)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_11); - index = 1; __pyx_t_12 = __pyx_t_5(__pyx_t_13); if (unlikely(!__pyx_t_12)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_12); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_13), 2) < 0) __PYX_ERR(0, 209, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - goto __pyx_L10_unpacking_done; - __pyx_L9_unpacking_failed:; - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 209, __pyx_L1_error) - __pyx_L10_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_h, __pyx_t_11); - __pyx_t_11 = 0; - __Pyx_XDECREF_SET(__pyx_v_w, __pyx_t_12); - __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":212 - * - * # place img in img4 - * if i == 0: # top left # <<<<<<<<<<<<<< - * img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - * x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - */ - __pyx_t_6 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_0, 0, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 212, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 212, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":213 - * # place img in img4 - * if i == 0: # top left - * img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles # <<<<<<<<<<<<<< - * x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - * x1b, y1b, x2b, y2b = w - (x2a - x1a), 
h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_np); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_10 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_full); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyInt_MultiplyObjC(__pyx_cur_scope->__pyx_v_s, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = __Pyx_PyInt_MultiplyObjC(__pyx_cur_scope->__pyx_v_s, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_img, __pyx_n_s_shape); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_12 = __Pyx_GetItemInt(__pyx_t_4, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_12); - __pyx_t_6 = 0; - __pyx_t_1 = 0; - __pyx_t_12 = 0; - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_4); - __Pyx_INCREF(__pyx_int_114); - __Pyx_GIVEREF(__pyx_int_114); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_int_114); - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - 
__Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_uint8); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_dtype, __pyx_t_6) < 0) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_10, __pyx_t_12, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_img4, __pyx_t_6); - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":214 - * if i == 0: # top left - * img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - * x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) # <<<<<<<<<<<<<< - * x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - * elif i == 1: # top right - */ - __pyx_t_15 = 0; - __pyx_t_6 = PyNumber_Subtract(__pyx_v_xc, __pyx_v_w); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_12 = __Pyx_PyInt_From_long(__pyx_t_15); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_10 = PyObject_RichCompare(__pyx_t_12, __pyx_t_6, Py_GT); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (__pyx_t_14) { - __pyx_t_10 = __Pyx_PyInt_From_long(__pyx_t_15); if 
(unlikely(!__pyx_t_10)) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_4 = __pyx_t_10; - __pyx_t_10 = 0; - } else { - __Pyx_INCREF(__pyx_t_6); - __pyx_t_4 = __pyx_t_6; - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __pyx_t_4; - __Pyx_INCREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_15 = 0; - __pyx_t_4 = PyNumber_Subtract(__pyx_v_yc, __pyx_v_h); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_12 = __Pyx_PyInt_From_long(__pyx_t_15); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_1 = PyObject_RichCompare(__pyx_t_12, __pyx_t_4, Py_GT); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_14) { - __pyx_t_1 = __Pyx_PyInt_From_long(__pyx_t_15); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 214, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = __pyx_t_1; - __pyx_t_1 = 0; - } else { - __Pyx_INCREF(__pyx_t_4); - __pyx_t_10 = __pyx_t_4; - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __pyx_t_10; - __Pyx_INCREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = __pyx_v_xc; - __Pyx_INCREF(__pyx_t_10); - __pyx_t_1 = __pyx_v_yc; - __Pyx_INCREF(__pyx_t_1); - __Pyx_XDECREF_SET(__pyx_v_x1a, __pyx_t_6); - __pyx_t_6 = 0; - __Pyx_XDECREF_SET(__pyx_v_y1a, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_x2a, __pyx_t_10); - __pyx_t_10 = 0; - __Pyx_XDECREF_SET(__pyx_v_y2a, __pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":215 - * img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - * x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large 
image) - * x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) # <<<<<<<<<<<<<< - * elif i == 1: # top right - * x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_x2a, __pyx_v_x1a); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 215, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = PyNumber_Subtract(__pyx_v_w, __pyx_t_1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 215, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Subtract(__pyx_v_y2a, __pyx_v_y1a); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 215, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyNumber_Subtract(__pyx_v_h, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 215, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __pyx_v_w; - __Pyx_INCREF(__pyx_t_1); - __pyx_t_6 = __pyx_v_h; - __Pyx_INCREF(__pyx_t_6); - __Pyx_XDECREF_SET(__pyx_v_x1b, __pyx_t_10); - __pyx_t_10 = 0; - __Pyx_XDECREF_SET(__pyx_v_y1b, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_x2b, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_y2b, __pyx_t_6); - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":212 - * - * # place img in img4 - * if i == 0: # top left # <<<<<<<<<<<<<< - * img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - * x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - */ - goto __pyx_L11; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":216 - * x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - * x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - * elif i == 1: # top right # <<<<<<<<<<<<<< - * x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - * x1b, y1b, x2b, 
y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - */ - __pyx_t_6 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_1, 1, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 216, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 216, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":217 - * x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - * elif i == 1: # top right - * x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc # <<<<<<<<<<<<<< - * x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - * elif i == 2: # bottom left - */ - __pyx_t_6 = __pyx_v_xc; - __Pyx_INCREF(__pyx_t_6); - __pyx_t_15 = 0; - __pyx_t_1 = PyNumber_Subtract(__pyx_v_yc, __pyx_v_h); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = __Pyx_PyInt_From_long(__pyx_t_15); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_12 = PyObject_RichCompare(__pyx_t_10, __pyx_t_1, Py_GT); __Pyx_XGOTREF(__pyx_t_12); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_12); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (__pyx_t_14) { - __pyx_t_12 = __Pyx_PyInt_From_long(__pyx_t_15); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_4 = __pyx_t_12; - __pyx_t_12 = 0; - } else { - __Pyx_INCREF(__pyx_t_1); - __pyx_t_4 = __pyx_t_1; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __pyx_t_4; - __Pyx_INCREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyInt_MultiplyObjC(__pyx_cur_scope->__pyx_v_s, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_4)) 
__PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_12 = PyNumber_Add(__pyx_v_xc, __pyx_v_w); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_11 = PyObject_RichCompare(__pyx_t_4, __pyx_t_12, Py_LT); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 217, __pyx_L1_error) - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 217, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (__pyx_t_14) { - __Pyx_INCREF(__pyx_t_4); - __pyx_t_10 = __pyx_t_4; - } else { - __Pyx_INCREF(__pyx_t_12); - __pyx_t_10 = __pyx_t_12; - } - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __pyx_t_10; - __Pyx_INCREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = __pyx_v_yc; - __Pyx_INCREF(__pyx_t_10); - __Pyx_XDECREF_SET(__pyx_v_x1a, __pyx_t_6); - __pyx_t_6 = 0; - __Pyx_XDECREF_SET(__pyx_v_y1a, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_x2a, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_y2a, __pyx_t_10); - __pyx_t_10 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":218 - * elif i == 1: # top right - * x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - * x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h # <<<<<<<<<<<<<< - * elif i == 2: # bottom left - * x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - */ - __pyx_t_10 = __pyx_int_0; - __Pyx_INCREF(__pyx_t_10); - __pyx_t_4 = PyNumber_Subtract(__pyx_v_y2a, __pyx_v_y1a); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 218, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyNumber_Subtract(__pyx_v_h, __pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 218, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Subtract(__pyx_v_x2a, __pyx_v_x1a); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 218, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_w); - __pyx_t_6 = __pyx_v_w; - __pyx_t_11 = PyObject_RichCompare(__pyx_t_4, __pyx_t_6, Py_LT); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 218, __pyx_L1_error) - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 218, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (__pyx_t_14) { - __Pyx_INCREF(__pyx_t_4); - __pyx_t_12 = __pyx_t_4; - } else { - __Pyx_INCREF(__pyx_t_6); - __pyx_t_12 = __pyx_t_6; - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __pyx_t_12; - __Pyx_INCREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = __pyx_v_h; - __Pyx_INCREF(__pyx_t_12); - __Pyx_XDECREF_SET(__pyx_v_x1b, __pyx_t_10); - __pyx_t_10 = 0; - __Pyx_XDECREF_SET(__pyx_v_y1b, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_x2b, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_y2b, __pyx_t_12); - __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":216 - * x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - * x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - * elif i == 1: # top right # <<<<<<<<<<<<<< - * x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - * x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - */ - goto __pyx_L11; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":219 - * x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - * x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - * elif i == 2: # bottom left # <<<<<<<<<<<<<< - * x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - * x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - */ - __pyx_t_12 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_2, 2, 0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 219, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_12); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_12); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 219, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":220 - * x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - * elif i == 2: # bottom left - * x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) # <<<<<<<<<<<<<< - * x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - * elif i == 3: # bottom right - */ - __pyx_t_15 = 0; - __pyx_t_12 = PyNumber_Subtract(__pyx_v_xc, __pyx_v_w); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 220, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_1 = __Pyx_PyInt_From_long(__pyx_t_15); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 220, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = PyObject_RichCompare(__pyx_t_1, __pyx_t_12, Py_GT); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 220, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 220, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (__pyx_t_14) { - __pyx_t_10 = __Pyx_PyInt_From_long(__pyx_t_15); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 220, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_4 = __pyx_t_10; - __pyx_t_10 = 0; - } else { - __Pyx_INCREF(__pyx_t_12); - __pyx_t_4 = __pyx_t_12; - } - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = __pyx_t_4; - __Pyx_INCREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __pyx_v_yc; - __Pyx_INCREF(__pyx_t_4); - __pyx_t_10 = __pyx_v_xc; - __Pyx_INCREF(__pyx_t_10); - __pyx_t_1 = PyNumber_Add(__pyx_v_yc, __pyx_v_h); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 220, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = __Pyx_PyInt_MultiplyObjC(__pyx_cur_scope->__pyx_v_s, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 220, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_13 = PyObject_RichCompare(__pyx_t_1, __pyx_t_6, Py_LT); __Pyx_XGOTREF(__pyx_t_13); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 220, __pyx_L1_error) - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_13); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 220, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - if (__pyx_t_14) { - __Pyx_INCREF(__pyx_t_1); - __pyx_t_11 = __pyx_t_1; - } else { - __Pyx_INCREF(__pyx_t_6); - __pyx_t_11 = __pyx_t_6; - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __pyx_t_11; - __Pyx_INCREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_XDECREF_SET(__pyx_v_x1a, __pyx_t_12); - __pyx_t_12 = 0; - __Pyx_XDECREF_SET(__pyx_v_y1a, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_x2a, __pyx_t_10); - __pyx_t_10 = 0; - __Pyx_XDECREF_SET(__pyx_v_y2a, __pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":221 - * elif i == 2: # bottom left - * x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - * x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) # <<<<<<<<<<<<<< - * elif i == 3: # bottom right - * x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_x2a, __pyx_v_x1a); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 221, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = PyNumber_Subtract(__pyx_v_w, __pyx_t_1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 221, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __pyx_int_0; - __Pyx_INCREF(__pyx_t_1); - __pyx_t_4 = __pyx_v_w; - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_h); - __pyx_t_12 = __pyx_v_h; - __pyx_t_11 = PyNumber_Subtract(__pyx_v_y2a, __pyx_v_y1a); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 221, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_13 = PyObject_RichCompare(__pyx_t_12, __pyx_t_11, Py_LT); 
__Pyx_XGOTREF(__pyx_t_13); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 221, __pyx_L1_error) - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_13); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 221, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - if (__pyx_t_14) { - __Pyx_INCREF(__pyx_t_12); - __pyx_t_6 = __pyx_t_12; - } else { - __Pyx_INCREF(__pyx_t_11); - __pyx_t_6 = __pyx_t_11; - } - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = __pyx_t_6; - __Pyx_INCREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF_SET(__pyx_v_x1b, __pyx_t_10); - __pyx_t_10 = 0; - __Pyx_XDECREF_SET(__pyx_v_y1b, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_x2b, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_y2b, __pyx_t_12); - __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":219 - * x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - * x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - * elif i == 2: # bottom left # <<<<<<<<<<<<<< - * x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - * x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - */ - goto __pyx_L11; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":222 - * x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - * x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - * elif i == 3: # bottom right # <<<<<<<<<<<<<< - * x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - * x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - */ - __pyx_t_12 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_3, 3, 0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 222, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_12); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 222, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (__pyx_t_14) { - - /* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":223 - * x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - * elif i == 3: # bottom right - * x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) # <<<<<<<<<<<<<< - * x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - * - */ - __pyx_t_12 = __pyx_v_xc; - __Pyx_INCREF(__pyx_t_12); - __pyx_t_4 = __pyx_v_yc; - __Pyx_INCREF(__pyx_t_4); - __pyx_t_1 = __Pyx_PyInt_MultiplyObjC(__pyx_cur_scope->__pyx_v_s, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = PyNumber_Add(__pyx_v_xc, __pyx_v_w); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = PyObject_RichCompare(__pyx_t_1, __pyx_t_10, Py_LT); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 223, __pyx_L1_error) - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (__pyx_t_14) { - __Pyx_INCREF(__pyx_t_1); - __pyx_t_6 = __pyx_t_1; - } else { - __Pyx_INCREF(__pyx_t_10); - __pyx_t_6 = __pyx_t_10; - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __pyx_t_6; - __Pyx_INCREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = PyNumber_Add(__pyx_v_yc, __pyx_v_h); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_10 = __Pyx_PyInt_MultiplyObjC(__pyx_cur_scope->__pyx_v_s, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_13 = PyObject_RichCompare(__pyx_t_6, __pyx_t_10, Py_LT); __Pyx_XGOTREF(__pyx_t_13); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 223, __pyx_L1_error) - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_13); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 223, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_13); 
__pyx_t_13 = 0; - if (__pyx_t_14) { - __Pyx_INCREF(__pyx_t_6); - __pyx_t_11 = __pyx_t_6; - } else { - __Pyx_INCREF(__pyx_t_10); - __pyx_t_11 = __pyx_t_10; - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __pyx_t_11; - __Pyx_INCREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_XDECREF_SET(__pyx_v_x1a, __pyx_t_12); - __pyx_t_12 = 0; - __Pyx_XDECREF_SET(__pyx_v_y1a, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_x2a, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_y2a, __pyx_t_6); - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":224 - * elif i == 3: # bottom right - * x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - * x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) # <<<<<<<<<<<<<< - * - * img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - */ - __pyx_t_6 = __pyx_int_0; - __Pyx_INCREF(__pyx_t_6); - __pyx_t_1 = __pyx_int_0; - __Pyx_INCREF(__pyx_t_1); - __pyx_t_4 = PyNumber_Subtract(__pyx_v_x2a, __pyx_v_x1a); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_w); - __pyx_t_12 = __pyx_v_w; - __pyx_t_10 = PyObject_RichCompare(__pyx_t_4, __pyx_t_12, Py_LT); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 224, __pyx_L1_error) - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (__pyx_t_14) { - __Pyx_INCREF(__pyx_t_4); - __pyx_t_11 = __pyx_t_4; - } else { - __Pyx_INCREF(__pyx_t_12); - __pyx_t_11 = __pyx_t_12; - } - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __pyx_t_11; - __Pyx_INCREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_INCREF(__pyx_v_h); - __pyx_t_11 = __pyx_v_h; - __pyx_t_12 = PyNumber_Subtract(__pyx_v_y2a, __pyx_v_y1a); if 
(unlikely(!__pyx_t_12)) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_13 = PyObject_RichCompare(__pyx_t_11, __pyx_t_12, Py_LT); __Pyx_XGOTREF(__pyx_t_13); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 224, __pyx_L1_error) - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_13); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - if (__pyx_t_14) { - __Pyx_INCREF(__pyx_t_11); - __pyx_t_10 = __pyx_t_11; - } else { - __Pyx_INCREF(__pyx_t_12); - __pyx_t_10 = __pyx_t_12; - } - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_11 = __pyx_t_10; - __Pyx_INCREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_XDECREF_SET(__pyx_v_x1b, __pyx_t_6); - __pyx_t_6 = 0; - __Pyx_XDECREF_SET(__pyx_v_y1b, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_x2b, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_y2b, __pyx_t_11); - __pyx_t_11 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":222 - * x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - * x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - * elif i == 3: # bottom right # <<<<<<<<<<<<<< - * x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - * x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - */ - } - __pyx_L11:; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":226 - * x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - * - * img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] # <<<<<<<<<<<<<< - * padw = x1a - x1b - * padh = y1a - y1b - */ - if (unlikely(!__pyx_v_y1b)) { __Pyx_RaiseUnboundLocalError("y1b"); __PYX_ERR(0, 226, __pyx_L1_error) } - if (unlikely(!__pyx_v_y2b)) { __Pyx_RaiseUnboundLocalError("y2b"); __PYX_ERR(0, 226, __pyx_L1_error) } - __pyx_t_11 = PySlice_New(__pyx_v_y1b, __pyx_v_y2b, Py_None); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 226, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_11); - if (unlikely(!__pyx_v_x1b)) { __Pyx_RaiseUnboundLocalError("x1b"); __PYX_ERR(0, 226, __pyx_L1_error) } - if (unlikely(!__pyx_v_x2b)) { __Pyx_RaiseUnboundLocalError("x2b"); __PYX_ERR(0, 226, __pyx_L1_error) } - __pyx_t_4 = PySlice_New(__pyx_v_x1b, __pyx_v_x2b, Py_None); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_11); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_11); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_4); - __pyx_t_11 = 0; - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_GetItem(__pyx_v_img, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_v_img4)) { __Pyx_RaiseUnboundLocalError("img4"); __PYX_ERR(0, 226, __pyx_L1_error) } - if (unlikely(!__pyx_v_y1a)) { __Pyx_RaiseUnboundLocalError("y1a"); __PYX_ERR(0, 226, __pyx_L1_error) } - if (unlikely(!__pyx_v_y2a)) { __Pyx_RaiseUnboundLocalError("y2a"); __PYX_ERR(0, 226, __pyx_L1_error) } - __pyx_t_1 = PySlice_New(__pyx_v_y1a, __pyx_v_y2a, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (unlikely(!__pyx_v_x1a)) { __Pyx_RaiseUnboundLocalError("x1a"); __PYX_ERR(0, 226, __pyx_L1_error) } - if (unlikely(!__pyx_v_x2a)) { __Pyx_RaiseUnboundLocalError("x2a"); __PYX_ERR(0, 226, __pyx_L1_error) } - __pyx_t_11 = PySlice_New(__pyx_v_x1a, __pyx_v_x2a, Py_None); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_11); - PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_11); - __pyx_t_1 = 0; - __pyx_t_11 
= 0; - if (unlikely((PyObject_SetItem(__pyx_v_img4, __pyx_t_6, __pyx_t_4) < 0))) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":227 - * - * img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - * padw = x1a - x1b # <<<<<<<<<<<<<< - * padh = y1a - y1b - * - */ - if (unlikely(!__pyx_v_x1a)) { __Pyx_RaiseUnboundLocalError("x1a"); __PYX_ERR(0, 227, __pyx_L1_error) } - if (unlikely(!__pyx_v_x1b)) { __Pyx_RaiseUnboundLocalError("x1b"); __PYX_ERR(0, 227, __pyx_L1_error) } - __pyx_t_4 = PyNumber_Subtract(__pyx_v_x1a, __pyx_v_x1b); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 227, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_padw, __pyx_t_4); - __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":228 - * img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - * padw = x1a - x1b - * padh = y1a - y1b # <<<<<<<<<<<<<< - * - * # Labels - */ - if (unlikely(!__pyx_v_y1a)) { __Pyx_RaiseUnboundLocalError("y1a"); __PYX_ERR(0, 228, __pyx_L1_error) } - if (unlikely(!__pyx_v_y1b)) { __Pyx_RaiseUnboundLocalError("y1b"); __PYX_ERR(0, 228, __pyx_L1_error) } - __pyx_t_4 = PyNumber_Subtract(__pyx_v_y1a, __pyx_v_y1b); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_padh, __pyx_t_4); - __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":231 - * - * # Labels - * labels, segments = self.labels[index].copy(), self.segments[index].copy() # <<<<<<<<<<<<<< - * if labels.size: - * labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_labels); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_11 = __Pyx_PyObject_GetItem(__pyx_t_6, 
__pyx_v_index); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_11, __pyx_n_s_copy); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_11 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_11 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_11)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_11); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_11, }; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_7, 0+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_segments); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_t_11, __pyx_v_index); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_copy); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_11))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_11); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_11); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_11, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[1] = 
{__pyx_t_1, }; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_11, __pyx_callargs+1-__pyx_t_7, 0+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - } - __Pyx_XDECREF_SET(__pyx_v_labels, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_segments, __pyx_t_6); - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":232 - * # Labels - * labels, segments = self.labels[index].copy(), self.segments[index].copy() - * if labels.size: # <<<<<<<<<<<<<< - * labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - * segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_labels, __pyx_n_s_size); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 232, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":233 - * labels, segments = self.labels[index].copy(), self.segments[index].copy() - * if labels.size: - * labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format # <<<<<<<<<<<<<< - * segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - * labels4.append(labels) - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_xywhn2xyxy); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_11 = __Pyx_PyObject_GetItem(__pyx_v_labels, __pyx_tuple__17); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_1 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_1)) { - PyObject* function = 
PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[6] = {__pyx_t_1, __pyx_t_11, __pyx_v_w, __pyx_v_h, __pyx_v_padw, __pyx_v_padh}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_7, 5+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - if (unlikely((PyObject_SetItem(__pyx_v_labels, __pyx_tuple__17, __pyx_t_6) < 0))) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":234 - * if labels.size: - * labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - * segments = [xyn2xy(x, w, h, padw, padh) for x in segments] # <<<<<<<<<<<<<< - * labels4.append(labels) - * segments4.extend(segments) - */ - { /* enter inner scope */ - __pyx_t_6 = PyList_New(0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 234, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_6); - if (likely(PyList_CheckExact(__pyx_v_segments)) || PyTuple_CheckExact(__pyx_v_segments)) { - __pyx_t_4 = __pyx_v_segments; __Pyx_INCREF(__pyx_t_4); __pyx_t_16 = 0; - __pyx_t_17 = NULL; - } else { - __pyx_t_16 = -1; __pyx_t_4 = PyObject_GetIter(__pyx_v_segments); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 234, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_17 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_4); if (unlikely(!__pyx_t_17)) __PYX_ERR(0, 234, __pyx_L15_error) - } - for (;;) { - if (likely(!__pyx_t_17)) { - if (likely(PyList_CheckExact(__pyx_t_4))) { - if (__pyx_t_16 >= PyList_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_11 = PyList_GET_ITEM(__pyx_t_4, __pyx_t_16); __Pyx_INCREF(__pyx_t_11); __pyx_t_16++; if (unlikely((0 
< 0))) __PYX_ERR(0, 234, __pyx_L15_error) - #else - __pyx_t_11 = PySequence_ITEM(__pyx_t_4, __pyx_t_16); __pyx_t_16++; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 234, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_11); - #endif - } else { - if (__pyx_t_16 >= PyTuple_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_11 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_16); __Pyx_INCREF(__pyx_t_11); __pyx_t_16++; if (unlikely((0 < 0))) __PYX_ERR(0, 234, __pyx_L15_error) - #else - __pyx_t_11 = PySequence_ITEM(__pyx_t_4, __pyx_t_16); __pyx_t_16++; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 234, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_11); - #endif - } - } else { - __pyx_t_11 = __pyx_t_17(__pyx_t_4); - if (unlikely(!__pyx_t_11)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 234, __pyx_L15_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_11); - } - __Pyx_XDECREF_SET(__pyx_8genexpr5__pyx_v_x, __pyx_t_11); - __pyx_t_11 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_xyn2xy); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 234, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[6] = {__pyx_t_10, __pyx_8genexpr5__pyx_v_x, __pyx_v_w, __pyx_v_h, __pyx_v_padw, __pyx_v_padh}; - __pyx_t_11 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_7, 5+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 234, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - if 
(unlikely(__Pyx_ListComp_Append(__pyx_t_6, (PyObject*)__pyx_t_11))) __PYX_ERR(0, 234, __pyx_L15_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_8genexpr5__pyx_v_x); __pyx_8genexpr5__pyx_v_x = 0; - goto __pyx_L18_exit_scope; - __pyx_L15_error:; - __Pyx_XDECREF(__pyx_8genexpr5__pyx_v_x); __pyx_8genexpr5__pyx_v_x = 0; - goto __pyx_L1_error; - __pyx_L18_exit_scope:; - } /* exit inner scope */ - __Pyx_DECREF_SET(__pyx_v_segments, __pyx_t_6); - __pyx_t_6 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":232 - * # Labels - * labels, segments = self.labels[index].copy(), self.segments[index].copy() - * if labels.size: # <<<<<<<<<<<<<< - * labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - * segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":235 - * labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - * segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - * labels4.append(labels) # <<<<<<<<<<<<<< - * segments4.extend(segments) - * - */ - __pyx_t_18 = __Pyx_PyObject_Append(__pyx_v_labels4, __pyx_v_labels); if (unlikely(__pyx_t_18 == ((int)-1))) __PYX_ERR(0, 235, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":236 - * segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - * labels4.append(labels) - * segments4.extend(segments) # <<<<<<<<<<<<<< - * - * # Concat/clip labels - */ - __pyx_t_18 = __Pyx_PyList_Extend(__pyx_v_segments4, __pyx_v_segments); if (unlikely(__pyx_t_18 == ((int)-1))) __PYX_ERR(0, 236, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":207 - * indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - * random.shuffle(indices) - * for i, index in enumerate(indices): # <<<<<<<<<<<<<< - * # Load image - * img, _, (h, w) = 
load_image(self, index) - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":239 - * - * # Concat/clip labels - * labels4 = np.concatenate(labels4, 0) # <<<<<<<<<<<<<< - * for x in (labels4[:, 1:], *segments4): - * np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_concatenate); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_v_labels4, __pyx_int_0}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_7, 2+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __Pyx_DECREF_SET(__pyx_v_labels4, __pyx_t_3); - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":240 - * # Concat/clip labels - * labels4 = np.concatenate(labels4, 0) - * for x in (labels4[:, 1:], *segments4): # <<<<<<<<<<<<<< - * np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - * # img4, labels4 = replicate(img4, labels4) # replicate - */ - __pyx_t_6 = __Pyx_PyObject_GetItem(__pyx_v_labels4, __pyx_tuple__17); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = PyList_New(1); 
if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_6); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_t_6); - __pyx_t_6 = 0; - __pyx_t_3 = __pyx_t_2; - __pyx_t_2 = 0; - if (__Pyx_PyList_Extend(__pyx_t_3, __pyx_v_segments4) < 0) __PYX_ERR(0, 240, __pyx_L1_error) - { - PyObject *__pyx_temp = PyList_AsTuple(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); - __pyx_t_3 = __pyx_temp; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - } - __pyx_t_2 = __pyx_t_3; __Pyx_INCREF(__pyx_t_2); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - for (;;) { - if (__pyx_t_8 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_8); __Pyx_INCREF(__pyx_t_3); __pyx_t_8++; if (unlikely((0 < 0))) __PYX_ERR(0, 240, __pyx_L1_error) - #else - __pyx_t_3 = PySequence_ITEM(__pyx_t_2, __pyx_t_8); __pyx_t_8++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_XDECREF_SET(__pyx_v_x, __pyx_t_3); - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":241 - * labels4 = np.concatenate(labels4, 0) - * for x in (labels4[:, 1:], *segments4): - * np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() # <<<<<<<<<<<<<< - * # img4, labels4 = replicate(img4, labels4) # replicate - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_clip); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyInt_MultiplyCObj(__pyx_int_2, __pyx_cur_scope->__pyx_v_s, 2, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) 
__PYX_ERR(0, 241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_x); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_out, __pyx_v_x) < 0) __PYX_ERR(0, 241, __pyx_L1_error) - __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_4, __pyx_t_3); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":240 - * # Concat/clip labels - * labels4 = np.concatenate(labels4, 0) - * for x in (labels4[:, 1:], *segments4): # <<<<<<<<<<<<<< - * np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - * # img4, labels4 = replicate(img4, labels4) # replicate - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":244 - * # img4, labels4 = replicate(img4, labels4) # replicate - * - * return img4, labels4 # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - if (unlikely(!__pyx_v_img4)) { __Pyx_RaiseUnboundLocalError("img4"); __PYX_ERR(0, 244, __pyx_L1_error) } - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_img4); - __Pyx_GIVEREF(__pyx_v_img4); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_img4); - __Pyx_INCREF(__pyx_v_labels4); - __Pyx_GIVEREF(__pyx_v_labels4); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_labels4); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":200 - * - * - * def load_mosaic(self, index): # <<<<<<<<<<<<<< - * # YOLOv5 4-mosaic loader. Loads 1 image + 3 random images into a 4-image mosaic - * labels4, segments4 = [], [] - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_XDECREF(__pyx_t_12); - __Pyx_XDECREF(__pyx_t_13); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.load_mosaic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_labels4); - __Pyx_XDECREF(__pyx_v_segments4); - __Pyx_XDECREF(__pyx_v_yc); - __Pyx_XDECREF(__pyx_v_xc); - __Pyx_XDECREF(__pyx_v_indices); - __Pyx_XDECREF(__pyx_v_i); - __Pyx_XDECREF(__pyx_v_img); - __Pyx_XDECREF(__pyx_v__); - __Pyx_XDECREF(__pyx_v_h); - __Pyx_XDECREF(__pyx_v_w); - __Pyx_XDECREF(__pyx_v_img4); - __Pyx_XDECREF(__pyx_v_x1a); - __Pyx_XDECREF(__pyx_v_y1a); - __Pyx_XDECREF(__pyx_v_x2a); - __Pyx_XDECREF(__pyx_v_y2a); - __Pyx_XDECREF(__pyx_v_x1b); - __Pyx_XDECREF(__pyx_v_y1b); - __Pyx_XDECREF(__pyx_v_x2b); - __Pyx_XDECREF(__pyx_v_y2b); - __Pyx_XDECREF(__pyx_v_padw); - __Pyx_XDECREF(__pyx_v_padh); - __Pyx_XDECREF(__pyx_v_labels); - __Pyx_XDECREF(__pyx_v_segments); - __Pyx_XDECREF(__pyx_v_x); - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic_2generator1); - __Pyx_XDECREF(__pyx_8genexpr5__pyx_v_x); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":247 - * - * - * def load_mosaic9(self, index): # <<<<<<<<<<<<<< - * # YOLOv5 9-mosaic loader. 
Loads 1 image + 8 random images into a 9-image mosaic - * labels9, segments9 = [], [] - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_13load_mosaic9(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_13load_mosaic9 = {"load_mosaic9", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_13load_mosaic9, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_13load_mosaic9(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_index = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("load_mosaic9 (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_index,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_index,0}; - #endif - PyObject* values[2] = {0,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = 
__Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_self)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 247, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_index)) != 0)) kw_args--; - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 247, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("load_mosaic9", 1, 2, 2, 1); __PYX_ERR(0, 247, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "load_mosaic9") < 0)) __PYX_ERR(0, 247, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_index = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("load_mosaic9", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 247, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.load_mosaic9", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9(__pyx_self, __pyx_v_self, __pyx_v_index); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_2generator2(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":280 - * - * padx, pady = c[:2] - * x1, y1, x2, y2 = (max(x, 0) for x in c) # allocate coords # <<<<<<<<<<<<<< - * - * # Labels - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 280, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_2generator2, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_load_mosaic9_locals_genexpr, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils); if (unlikely(!gen)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - 
__pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.load_mosaic9.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_2generator2(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - PyObject *(*__pyx_t_3)(PyObject *); - PyObject *__pyx_t_4 = NULL; - long __pyx_t_5; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L6_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 280, __pyx_L1_error) - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_c)) { __Pyx_RaiseClosureNameError("c"); __PYX_ERR(0, 280, __pyx_L1_error) } - if (likely(PyList_CheckExact(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_c)) || PyTuple_CheckExact(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_c)) { - __pyx_t_1 = __pyx_cur_scope->__pyx_outer_scope->__pyx_v_c; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0; - __pyx_t_3 = NULL; - } else { - __pyx_t_2 = -1; __pyx_t_1 = 
PyObject_GetIter(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_c); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 280, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_3)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_2 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 280, __pyx_L1_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 280, __pyx_L1_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } - } else { - __pyx_t_4 = __pyx_t_3(__pyx_t_1); - if (unlikely(!__pyx_t_4)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 280, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_4); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_x); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_x, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_5 = 0; - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_x); - __pyx_t_4 = __pyx_cur_scope->__pyx_v_x; - __pyx_t_7 = __Pyx_PyInt_From_long(__pyx_t_5); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = 
PyObject_RichCompare(__pyx_t_7, __pyx_t_4, Py_GT); __Pyx_XGOTREF(__pyx_t_8); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_t_8); if (unlikely((__pyx_t_9 < 0))) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (__pyx_t_9) { - __pyx_t_8 = __Pyx_PyInt_From_long(__pyx_t_5); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_6 = __pyx_t_8; - __pyx_t_8 = 0; - } else { - __Pyx_INCREF(__pyx_t_4); - __pyx_t_6 = __pyx_t_4; - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_INCREF(__pyx_t_6); - __pyx_r = __pyx_t_6; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XGIVEREF(__pyx_t_1); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_1; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_2; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_3; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L6_resume_from_yield:; - __pyx_t_1 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_1); - __pyx_t_2 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_3 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 280, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - 
#endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_5generator3(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":295 - * - * # Offset - * yc, xc = (int(random.uniform(0, s)) for _ in self.mosaic_border) # mosaic center x, y # <<<<<<<<<<<<<< - * img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s] - * - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_3genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 295, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = 
__Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_5generator3, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_load_mosaic9_locals_genexpr, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils); if (unlikely(!gen)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.load_mosaic9.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_5generator3(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L6_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 295, __pyx_L1_error) - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_self)) 
{ __Pyx_RaiseClosureNameError("self"); __PYX_ERR(0, 295, __pyx_L1_error) } - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_self, __pyx_n_s_mosaic_border); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 295, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_1); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 295, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_1); __pyx_t_3++; if (unlikely((0 < 0))) __PYX_ERR(0, 295, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 295, 
__pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v__); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v__, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_random); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_uniform); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_s)) { __Pyx_RaiseClosureNameError("s"); __PYX_ERR(0, 295, __pyx_L1_error) } - __pyx_t_5 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_7 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_5, __pyx_int_0, __pyx_cur_scope->__pyx_outer_scope->__pyx_v_s}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_7, 2+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = __Pyx_PyNumber_Int(__pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - __Pyx_XGIVEREF(__pyx_t_2); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_2; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_3; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_4; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - 
return __pyx_r; - __pyx_L6_resume_from_yield:; - __pyx_t_2 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_2); - __pyx_t_3 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_4 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 295, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":247 - * - * - * def load_mosaic9(self, index): # <<<<<<<<<<<<<< - * # YOLOv5 9-mosaic loader. 
Loads 1 image + 8 random images into a 9-image mosaic - * labels9, segments9 = [], [] - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_index) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *__pyx_cur_scope; - PyObject *__pyx_v_labels9 = NULL; - PyObject *__pyx_v_segments9 = NULL; - PyObject *__pyx_v_indices = NULL; - PyObject *__pyx_v_i = NULL; - PyObject *__pyx_v_img = NULL; - CYTHON_UNUSED PyObject *__pyx_v__ = NULL; - PyObject *__pyx_v_h = NULL; - PyObject *__pyx_v_w = NULL; - PyObject *__pyx_v_img9 = NULL; - PyObject *__pyx_v_h0 = NULL; - PyObject *__pyx_v_w0 = NULL; - PyObject *__pyx_v_padx = NULL; - PyObject *__pyx_v_pady = NULL; - PyObject *__pyx_v_x1 = NULL; - PyObject *__pyx_v_y1 = NULL; - PyObject *__pyx_v_x2 = NULL; - PyObject *__pyx_v_y2 = NULL; - PyObject *__pyx_v_labels = NULL; - PyObject *__pyx_v_segments = NULL; - PyObject *__pyx_v_hp = NULL; - PyObject *__pyx_v_wp = NULL; - PyObject *__pyx_v_yc = NULL; - PyObject *__pyx_v_xc = NULL; - PyObject *__pyx_v_x = NULL; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_2generator2 = 0; - PyObject *__pyx_8genexpr7__pyx_v_x = NULL; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_5generator3 = 0; - PyObject *__pyx_8genexpr9__pyx_v_x = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - Py_ssize_t __pyx_t_7; - PyObject *(*__pyx_t_8)(PyObject *); - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *(*__pyx_t_11)(PyObject *); - PyObject *__pyx_t_12 = NULL; - PyObject *__pyx_t_13 = NULL; - int __pyx_t_14; - Py_ssize_t __pyx_t_15; - PyObject *(*__pyx_t_16)(PyObject 
*); - int __pyx_t_17; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("load_mosaic9", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 247, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_v_self = __pyx_v_self; - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_self); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_self); - __Pyx_INCREF(__pyx_v_index); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":249 - * def load_mosaic9(self, index): - * # YOLOv5 9-mosaic loader. Loads 1 image + 8 random images into a 9-image mosaic - * labels9, segments9 = [], [] # <<<<<<<<<<<<<< - * s = self.img_size - * indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices - */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_labels9 = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_segments9 = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":250 - * # YOLOv5 9-mosaic loader. 
Loads 1 image + 8 random images into a 9-image mosaic - * labels9, segments9 = [], [] - * s = self.img_size # <<<<<<<<<<<<<< - * indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices - * random.shuffle(indices) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_img_size); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 250, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_cur_scope->__pyx_v_s = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":251 - * labels9, segments9 = [], [] - * s = self.img_size - * indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices # <<<<<<<<<<<<<< - * random.shuffle(indices) - * for i, index in enumerate(indices): - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_index); - __Pyx_GIVEREF(__pyx_v_index); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_v_index); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_random); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_choices); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_indices); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_k, __pyx_int_8) < 0) __PYX_ERR(0, 251, __pyx_L1_error) - __pyx_t_5 = 
__Pyx_PyObject_Call(__pyx_t_3, __pyx_t_4, __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_2, __pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_indices = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":252 - * s = self.img_size - * indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices - * random.shuffle(indices) # <<<<<<<<<<<<<< - * for i, index in enumerate(indices): - * # Load image - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_random); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_shuffle); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_5, __pyx_v_indices}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_6, 1+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":253 - * indices = [index] + random.choices(self.indices, 
k=8) # 8 additional image indices - * random.shuffle(indices) - * for i, index in enumerate(indices): # <<<<<<<<<<<<<< - * # Load image - * img, _, (h, w) = load_image(self, index) - */ - __Pyx_INCREF(__pyx_int_0); - __pyx_t_1 = __pyx_int_0; - if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) { - __pyx_t_2 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_2); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - } else { - __pyx_t_7 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 253, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_8)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_7); __Pyx_INCREF(__pyx_t_5); __pyx_t_7++; if (unlikely((0 < 0))) __PYX_ERR(0, 253, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } else { - if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_7); __Pyx_INCREF(__pyx_t_5); __pyx_t_7++; if (unlikely((0 < 0))) __PYX_ERR(0, 253, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } - } else { - __pyx_t_5 = __pyx_t_8(__pyx_t_2); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 253, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_5); - 
} - __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_5); - __pyx_t_5 = 0; - __Pyx_INCREF(__pyx_t_1); - __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_1); - __pyx_t_5 = __Pyx_PyInt_AddObjC(__pyx_t_1, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); - __pyx_t_1 = __pyx_t_5; - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":255 - * for i, index in enumerate(indices): - * # Load image - * img, _, (h, w) = load_image(self, index) # <<<<<<<<<<<<<< - * - * # place img in img9 - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_load_image); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_3, __pyx_cur_scope->__pyx_v_self, __pyx_v_index}; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_5))) || (PyList_CheckExact(__pyx_t_5))) { - PyObject* sequence = __pyx_t_5; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 3)) { - if (size > 3) __Pyx_RaiseTooManyValuesError(3); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 255, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_9 = 
PyTuple_GET_ITEM(sequence, 2); - } else { - __pyx_t_4 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - __pyx_t_9 = PyList_GET_ITEM(sequence, 2); - } - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_9); - #else - __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_9 = PySequence_ITEM(sequence, 2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_10 = PyObject_GetIter(__pyx_t_5); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_11 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_10); - index = 0; __pyx_t_4 = __pyx_t_11(__pyx_t_10); if (unlikely(!__pyx_t_4)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_4); - index = 1; __pyx_t_3 = __pyx_t_11(__pyx_t_10); if (unlikely(!__pyx_t_3)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 2; __pyx_t_9 = __pyx_t_11(__pyx_t_10); if (unlikely(!__pyx_t_9)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_11(__pyx_t_10), 3) < 0) __PYX_ERR(0, 255, __pyx_L1_error) - __pyx_t_11 = NULL; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_11 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 255, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_img, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v__, __pyx_t_3); - __pyx_t_3 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_9))) || 
(PyList_CheckExact(__pyx_t_9))) { - PyObject* sequence = __pyx_t_9; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 255, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_10 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_12 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_10 = PyList_GET_ITEM(sequence, 0); - __pyx_t_12 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(__pyx_t_12); - #else - __pyx_t_10 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_12 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - #endif - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_13 = PyObject_GetIter(__pyx_t_9); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_11 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_13); - index = 0; __pyx_t_10 = __pyx_t_11(__pyx_t_13); if (unlikely(!__pyx_t_10)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_10); - index = 1; __pyx_t_12 = __pyx_t_11(__pyx_t_13); if (unlikely(!__pyx_t_12)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_12); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_11(__pyx_t_13), 2) < 0) __PYX_ERR(0, 255, __pyx_L1_error) - __pyx_t_11 = NULL; - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_11 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 255, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_h, 
__pyx_t_10); - __pyx_t_10 = 0; - __Pyx_XDECREF_SET(__pyx_v_w, __pyx_t_12); - __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":258 - * - * # place img in img9 - * if i == 0: # center # <<<<<<<<<<<<<< - * img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - * h0, w0 = h, w - */ - __pyx_t_5 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_0, 0, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 258, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":259 - * # place img in img9 - * if i == 0: # center - * img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles # <<<<<<<<<<<<<< - * h0, w0 = h, w - * c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_full); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyInt_MultiplyObjC(__pyx_cur_scope->__pyx_v_s, __pyx_int_3, 3, 0, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_PyInt_MultiplyObjC(__pyx_cur_scope->__pyx_v_s, __pyx_int_3, 3, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_img, __pyx_n_s_shape); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_12 = __Pyx_GetItemInt(__pyx_t_4, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 259, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_12); - __pyx_t_5 = 0; - __pyx_t_3 = 0; - __pyx_t_12 = 0; - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_4); - __Pyx_INCREF(__pyx_int_114); - __Pyx_GIVEREF(__pyx_int_114); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_int_114); - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_uint8); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_9, __pyx_t_12, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 259, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_img9, __pyx_t_5); - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":260 - * if i == 0: # center - * img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - * h0, w0 = h, w # <<<<<<<<<<<<<< - * c = s, s, s + w, s + h # xmin, ymin, xmax, 
ymax (base) coordinates - * elif i == 1: # top - */ - __pyx_t_5 = __pyx_v_h; - __Pyx_INCREF(__pyx_t_5); - __pyx_t_4 = __pyx_v_w; - __Pyx_INCREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_h0, __pyx_t_5); - __pyx_t_5 = 0; - __Pyx_XDECREF_SET(__pyx_v_w0, __pyx_t_4); - __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":261 - * img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - * h0, w0 = h, w - * c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates # <<<<<<<<<<<<<< - * elif i == 1: # top - * c = s, s - h, s + w, s - */ - __pyx_t_4 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_w); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_h); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_12 = PyTuple_New(4); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 261, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_s); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_cur_scope->__pyx_v_s); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_s); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_12, 2, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_12, 3, __pyx_t_5); - __pyx_t_4 = 0; - __pyx_t_5 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_c); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_c, __pyx_t_12); - __Pyx_GIVEREF(__pyx_t_12); - __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":258 - * - * # place img in img9 - * if i == 0: # center # <<<<<<<<<<<<<< - * img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - * h0, w0 = h, w - */ - goto __pyx_L9; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":262 - * h0, w0 = 
h, w - * c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates - * elif i == 1: # top # <<<<<<<<<<<<<< - * c = s, s - h, s + w, s - * elif i == 2: # top right - */ - __pyx_t_12 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_1, 1, 0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 262, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_12); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 262, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":263 - * c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates - * elif i == 1: # top - * c = s, s - h, s + w, s # <<<<<<<<<<<<<< - * elif i == 2: # top right - * c = s + wp, s - h, s + wp + w, s - */ - __pyx_t_12 = PyNumber_Subtract(__pyx_cur_scope->__pyx_v_s, __pyx_v_h); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 263, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_5 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_w); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 263, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = PyTuple_New(4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 263, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_s); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_12); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_5); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_s); - PyTuple_SET_ITEM(__pyx_t_4, 3, __pyx_cur_scope->__pyx_v_s); - __pyx_t_12 = 0; - __pyx_t_5 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_c); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_c, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":262 - * h0, w0 = h, w - * c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates - * 
elif i == 1: # top # <<<<<<<<<<<<<< - * c = s, s - h, s + w, s - * elif i == 2: # top right - */ - goto __pyx_L9; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":264 - * elif i == 1: # top - * c = s, s - h, s + w, s - * elif i == 2: # top right # <<<<<<<<<<<<<< - * c = s + wp, s - h, s + wp + w, s - * elif i == 3: # right - */ - __pyx_t_4 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_2, 2, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 264, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 264, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":265 - * c = s, s - h, s + w, s - * elif i == 2: # top right - * c = s + wp, s - h, s + wp + w, s # <<<<<<<<<<<<<< - * elif i == 3: # right - * c = s + w0, s, s + w0 + w, s + h - */ - if (unlikely(!__pyx_v_wp)) { __Pyx_RaiseUnboundLocalError("wp"); __PYX_ERR(0, 265, __pyx_L1_error) } - __pyx_t_4 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_wp); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 265, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyNumber_Subtract(__pyx_cur_scope->__pyx_v_s, __pyx_v_h); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 265, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(!__pyx_v_wp)) { __Pyx_RaiseUnboundLocalError("wp"); __PYX_ERR(0, 265, __pyx_L1_error) } - __pyx_t_12 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_wp); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 265, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_9 = PyNumber_Add(__pyx_t_12, __pyx_v_w); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 265, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyTuple_New(4); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 265, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - 
PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_9); - PyTuple_SET_ITEM(__pyx_t_12, 2, __pyx_t_9); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_s); - PyTuple_SET_ITEM(__pyx_t_12, 3, __pyx_cur_scope->__pyx_v_s); - __pyx_t_4 = 0; - __pyx_t_5 = 0; - __pyx_t_9 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_c); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_c, __pyx_t_12); - __Pyx_GIVEREF(__pyx_t_12); - __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":264 - * elif i == 1: # top - * c = s, s - h, s + w, s - * elif i == 2: # top right # <<<<<<<<<<<<<< - * c = s + wp, s - h, s + wp + w, s - * elif i == 3: # right - */ - goto __pyx_L9; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":266 - * elif i == 2: # top right - * c = s + wp, s - h, s + wp + w, s - * elif i == 3: # right # <<<<<<<<<<<<<< - * c = s + w0, s, s + w0 + w, s + h - * elif i == 4: # bottom right - */ - __pyx_t_12 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_3, 3, 0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 266, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_12); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 266, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":267 - * c = s + wp, s - h, s + wp + w, s - * elif i == 3: # right - * c = s + w0, s, s + w0 + w, s + h # <<<<<<<<<<<<<< - * elif i == 4: # bottom right - * c = s + w0, s + hp, s + w0 + w, s + hp + h - */ - if (unlikely(!__pyx_v_w0)) { __Pyx_RaiseUnboundLocalError("w0"); __PYX_ERR(0, 267, __pyx_L1_error) } - __pyx_t_12 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_w0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - if (unlikely(!__pyx_v_w0)) { __Pyx_RaiseUnboundLocalError("w0"); __PYX_ERR(0, 267, __pyx_L1_error) } - __pyx_t_9 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_w0); if 
(unlikely(!__pyx_t_9)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_5 = PyNumber_Add(__pyx_t_9, __pyx_v_w); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_h); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_4 = PyTuple_New(4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 267, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_12); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_s); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_9); - PyTuple_SET_ITEM(__pyx_t_4, 3, __pyx_t_9); - __pyx_t_12 = 0; - __pyx_t_5 = 0; - __pyx_t_9 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_c); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_c, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":266 - * elif i == 2: # top right - * c = s + wp, s - h, s + wp + w, s - * elif i == 3: # right # <<<<<<<<<<<<<< - * c = s + w0, s, s + w0 + w, s + h - * elif i == 4: # bottom right - */ - goto __pyx_L9; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":268 - * elif i == 3: # right - * c = s + w0, s, s + w0 + w, s + h - * elif i == 4: # bottom right # <<<<<<<<<<<<<< - * c = s + w0, s + hp, s + w0 + w, s + hp + h - * elif i == 5: # bottom - */ - __pyx_t_4 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_4, 4, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 268, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 268, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_14) { - - /* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":269 - * c = s + w0, s, s + w0 + w, s + h - * elif i == 4: # bottom right - * c = s + w0, s + hp, s + w0 + w, s + hp + h # <<<<<<<<<<<<<< - * elif i == 5: # bottom - * c = s + w0 - w, s + h0, s + w0, s + h0 + h - */ - if (unlikely(!__pyx_v_w0)) { __Pyx_RaiseUnboundLocalError("w0"); __PYX_ERR(0, 269, __pyx_L1_error) } - __pyx_t_4 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_w0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 269, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (unlikely(!__pyx_v_hp)) { __Pyx_RaiseUnboundLocalError("hp"); __PYX_ERR(0, 269, __pyx_L1_error) } - __pyx_t_9 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_hp); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 269, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (unlikely(!__pyx_v_w0)) { __Pyx_RaiseUnboundLocalError("w0"); __PYX_ERR(0, 269, __pyx_L1_error) } - __pyx_t_5 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_w0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 269, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_12 = PyNumber_Add(__pyx_t_5, __pyx_v_w); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 269, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_v_hp)) { __Pyx_RaiseUnboundLocalError("hp"); __PYX_ERR(0, 269, __pyx_L1_error) } - __pyx_t_5 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_hp); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 269, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyNumber_Add(__pyx_t_5, __pyx_v_h); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 269, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 269, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_9); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_9); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_12); - 
__Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_3); - __pyx_t_4 = 0; - __pyx_t_9 = 0; - __pyx_t_12 = 0; - __pyx_t_3 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_c); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_c, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":268 - * elif i == 3: # right - * c = s + w0, s, s + w0 + w, s + h - * elif i == 4: # bottom right # <<<<<<<<<<<<<< - * c = s + w0, s + hp, s + w0 + w, s + hp + h - * elif i == 5: # bottom - */ - goto __pyx_L9; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":270 - * elif i == 4: # bottom right - * c = s + w0, s + hp, s + w0 + w, s + hp + h - * elif i == 5: # bottom # <<<<<<<<<<<<<< - * c = s + w0 - w, s + h0, s + w0, s + h0 + h - * elif i == 6: # bottom left - */ - __pyx_t_5 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_5, 5, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 270, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 270, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":271 - * c = s + w0, s + hp, s + w0 + w, s + hp + h - * elif i == 5: # bottom - * c = s + w0 - w, s + h0, s + w0, s + h0 + h # <<<<<<<<<<<<<< - * elif i == 6: # bottom left - * c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h - */ - if (unlikely(!__pyx_v_w0)) { __Pyx_RaiseUnboundLocalError("w0"); __PYX_ERR(0, 271, __pyx_L1_error) } - __pyx_t_5 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_w0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyNumber_Subtract(__pyx_t_5, __pyx_v_w); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_v_h0)) { __Pyx_RaiseUnboundLocalError("h0"); __PYX_ERR(0, 271, __pyx_L1_error) } - __pyx_t_5 = 
PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_h0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(!__pyx_v_w0)) { __Pyx_RaiseUnboundLocalError("w0"); __PYX_ERR(0, 271, __pyx_L1_error) } - __pyx_t_12 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_w0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - if (unlikely(!__pyx_v_h0)) { __Pyx_RaiseUnboundLocalError("h0"); __PYX_ERR(0, 271, __pyx_L1_error) } - __pyx_t_9 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_h0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_4 = PyNumber_Add(__pyx_t_9, __pyx_v_h); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = PyTuple_New(4); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 271, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_9, 2, __pyx_t_12); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_9, 3, __pyx_t_4); - __pyx_t_3 = 0; - __pyx_t_5 = 0; - __pyx_t_12 = 0; - __pyx_t_4 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_c); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_c, __pyx_t_9); - __Pyx_GIVEREF(__pyx_t_9); - __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":270 - * elif i == 4: # bottom right - * c = s + w0, s + hp, s + w0 + w, s + hp + h - * elif i == 5: # bottom # <<<<<<<<<<<<<< - * c = s + w0 - w, s + h0, s + w0, s + h0 + h - * elif i == 6: # bottom left - */ - goto __pyx_L9; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":272 - * elif i == 5: # bottom - * c = s + w0 - w, s + h0, s + w0, s + h0 + h - * elif i == 6: # bottom left # <<<<<<<<<<<<<< - * c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h - * elif i 
== 7: # left - */ - __pyx_t_9 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_6, 6, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 272, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 272, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":273 - * c = s + w0 - w, s + h0, s + w0, s + h0 + h - * elif i == 6: # bottom left - * c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h # <<<<<<<<<<<<<< - * elif i == 7: # left - * c = s - w, s + h0 - h, s, s + h0 - */ - if (unlikely(!__pyx_v_w0)) { __Pyx_RaiseUnboundLocalError("w0"); __PYX_ERR(0, 273, __pyx_L1_error) } - __pyx_t_9 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_w0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (unlikely(!__pyx_v_wp)) { __Pyx_RaiseUnboundLocalError("wp"); __PYX_ERR(0, 273, __pyx_L1_error) } - __pyx_t_4 = PyNumber_Subtract(__pyx_t_9, __pyx_v_wp); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = PyNumber_Subtract(__pyx_t_4, __pyx_v_w); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_v_h0)) { __Pyx_RaiseUnboundLocalError("h0"); __PYX_ERR(0, 273, __pyx_L1_error) } - __pyx_t_4 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_h0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (unlikely(!__pyx_v_w0)) { __Pyx_RaiseUnboundLocalError("w0"); __PYX_ERR(0, 273, __pyx_L1_error) } - __pyx_t_12 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_w0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - if (unlikely(!__pyx_v_wp)) { __Pyx_RaiseUnboundLocalError("wp"); __PYX_ERR(0, 273, __pyx_L1_error) } - __pyx_t_5 = 
PyNumber_Subtract(__pyx_t_12, __pyx_v_wp); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (unlikely(!__pyx_v_h0)) { __Pyx_RaiseUnboundLocalError("h0"); __PYX_ERR(0, 273, __pyx_L1_error) } - __pyx_t_12 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_h0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_3 = PyNumber_Add(__pyx_t_12, __pyx_v_h); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyTuple_New(4); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 273, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_GIVEREF(__pyx_t_9); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_9); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_12, 2, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_12, 3, __pyx_t_3); - __pyx_t_9 = 0; - __pyx_t_4 = 0; - __pyx_t_5 = 0; - __pyx_t_3 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_c); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_c, __pyx_t_12); - __Pyx_GIVEREF(__pyx_t_12); - __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":272 - * elif i == 5: # bottom - * c = s + w0 - w, s + h0, s + w0, s + h0 + h - * elif i == 6: # bottom left # <<<<<<<<<<<<<< - * c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h - * elif i == 7: # left - */ - goto __pyx_L9; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":274 - * elif i == 6: # bottom left - * c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h - * elif i == 7: # left # <<<<<<<<<<<<<< - * c = s - w, s + h0 - h, s, s + h0 - * elif i == 8: # top left - */ - __pyx_t_12 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_7, 7, 0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 274, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_14 = 
__Pyx_PyObject_IsTrue(__pyx_t_12); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 274, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":275 - * c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h - * elif i == 7: # left - * c = s - w, s + h0 - h, s, s + h0 # <<<<<<<<<<<<<< - * elif i == 8: # top left - * c = s - w, s + h0 - hp - h, s, s + h0 - hp - */ - __pyx_t_12 = PyNumber_Subtract(__pyx_cur_scope->__pyx_v_s, __pyx_v_w); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - if (unlikely(!__pyx_v_h0)) { __Pyx_RaiseUnboundLocalError("h0"); __PYX_ERR(0, 275, __pyx_L1_error) } - __pyx_t_3 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_h0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyNumber_Subtract(__pyx_t_3, __pyx_v_h); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_v_h0)) { __Pyx_RaiseUnboundLocalError("h0"); __PYX_ERR(0, 275, __pyx_L1_error) } - __pyx_t_3 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_h0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_12); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_5); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_s); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 3, __pyx_t_3); - __pyx_t_12 = 0; - __pyx_t_5 = 0; - __pyx_t_3 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_c); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_c, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - - /* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":274 - * elif i == 6: # bottom left - * c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h - * elif i == 7: # left # <<<<<<<<<<<<<< - * c = s - w, s + h0 - h, s, s + h0 - * elif i == 8: # top left - */ - goto __pyx_L9; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":276 - * elif i == 7: # left - * c = s - w, s + h0 - h, s, s + h0 - * elif i == 8: # top left # <<<<<<<<<<<<<< - * c = s - w, s + h0 - hp - h, s, s + h0 - hp - * - */ - __pyx_t_4 = __Pyx_PyInt_EqObjC(__pyx_v_i, __pyx_int_8, 8, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 276, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 276, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":277 - * c = s - w, s + h0 - h, s, s + h0 - * elif i == 8: # top left - * c = s - w, s + h0 - hp - h, s, s + h0 - hp # <<<<<<<<<<<<<< - * - * padx, pady = c[:2] - */ - __pyx_t_4 = PyNumber_Subtract(__pyx_cur_scope->__pyx_v_s, __pyx_v_w); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (unlikely(!__pyx_v_h0)) { __Pyx_RaiseUnboundLocalError("h0"); __PYX_ERR(0, 277, __pyx_L1_error) } - __pyx_t_3 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_h0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (unlikely(!__pyx_v_hp)) { __Pyx_RaiseUnboundLocalError("hp"); __PYX_ERR(0, 277, __pyx_L1_error) } - __pyx_t_5 = PyNumber_Subtract(__pyx_t_3, __pyx_v_hp); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Subtract(__pyx_t_5, __pyx_v_h); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_v_h0)) { __Pyx_RaiseUnboundLocalError("h0"); 
__PYX_ERR(0, 277, __pyx_L1_error) } - __pyx_t_5 = PyNumber_Add(__pyx_cur_scope->__pyx_v_s, __pyx_v_h0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(!__pyx_v_hp)) { __Pyx_RaiseUnboundLocalError("hp"); __PYX_ERR(0, 277, __pyx_L1_error) } - __pyx_t_12 = PyNumber_Subtract(__pyx_t_5, __pyx_v_hp); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_s); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_cur_scope->__pyx_v_s); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_12); - __pyx_t_4 = 0; - __pyx_t_3 = 0; - __pyx_t_12 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_c); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_c, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":276 - * elif i == 7: # left - * c = s - w, s + h0 - h, s, s + h0 - * elif i == 8: # top left # <<<<<<<<<<<<<< - * c = s - w, s + h0 - hp - h, s, s + h0 - hp - * - */ - } - __pyx_L9:; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":279 - * c = s - w, s + h0 - hp - h, s, s + h0 - hp - * - * padx, pady = c[:2] # <<<<<<<<<<<<<< - * x1, y1, x2, y2 = (max(x, 0) for x in c) # allocate coords - * - */ - if (unlikely(!__pyx_cur_scope->__pyx_v_c)) { __Pyx_RaiseUnboundLocalError("c"); __PYX_ERR(0, 279, __pyx_L1_error) } - __pyx_t_5 = __Pyx_PyTuple_GetSlice(__pyx_cur_scope->__pyx_v_c, 0, 2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (1) { - PyObject* sequence = __pyx_t_5; - Py_ssize_t size = 
__Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 279, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_12 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_12); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_12 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 279, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __Pyx_XDECREF_SET(__pyx_v_padx, __pyx_t_12); - __pyx_t_12 = 0; - __Pyx_XDECREF_SET(__pyx_v_pady, __pyx_t_3); - __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":280 - * - * padx, pady = c[:2] - * x1, y1, x2, y2 = (max(x, 0) for x in c) # allocate coords # <<<<<<<<<<<<<< - * - * # Labels - */ - __pyx_t_5 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if ((likely(PyTuple_CheckExact(__pyx_t_5))) || (PyList_CheckExact(__pyx_t_5))) { - PyObject* sequence = __pyx_t_5; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 280, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_12 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 2); - __pyx_t_9 = PyTuple_GET_ITEM(sequence, 3); - } else { - __pyx_t_3 = PyList_GET_ITEM(sequence, 0); - __pyx_t_12 = 
PyList_GET_ITEM(sequence, 1); - __pyx_t_4 = PyList_GET_ITEM(sequence, 2); - __pyx_t_9 = PyList_GET_ITEM(sequence, 3); - } - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_12); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_t_9); - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_3,&__pyx_t_12,&__pyx_t_4,&__pyx_t_9}; - for (i=0; i < 4; i++) { - PyObject* item = PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } else { - Py_ssize_t index = -1; - PyObject** temps[4] = {&__pyx_t_3,&__pyx_t_12,&__pyx_t_4,&__pyx_t_9}; - __pyx_t_10 = PyObject_GetIter(__pyx_t_5); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_11 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_10); - for (index=0; index < 4; index++) { - PyObject* item = __pyx_t_11(__pyx_t_10); if (unlikely(!item)) goto __pyx_L10_unpacking_failed; - __Pyx_GOTREF(item); - *(temps[index]) = item; - } - if (__Pyx_IternextUnpackEndCheck(__pyx_t_11(__pyx_t_10), 4) < 0) __PYX_ERR(0, 280, __pyx_L1_error) - __pyx_t_11 = NULL; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - goto __pyx_L11_unpacking_done; - __pyx_L10_unpacking_failed:; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_11 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 280, __pyx_L1_error) - __pyx_L11_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_x1, __pyx_t_3); - __pyx_t_3 = 0; - __Pyx_XDECREF_SET(__pyx_v_y1, __pyx_t_12); - __pyx_t_12 = 0; - __Pyx_XDECREF_SET(__pyx_v_x2, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_y2, __pyx_t_9); - __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":283 - * - * # Labels - * labels, segments = self.labels[index].copy(), self.segments[index].copy() # <<<<<<<<<<<<<< - * if labels.size: - * labels[:, 1:] = 
xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_labels); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_4 = __Pyx_PyObject_GetItem(__pyx_t_9, __pyx_v_index); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_copy); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_9); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_9, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_4, }; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+1-__pyx_t_6, 0+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_self, __pyx_n_s_segments); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_12 = __Pyx_PyObject_GetItem(__pyx_t_4, __pyx_v_index); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_copy); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && 
likely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_12)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_12); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_12, }; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_6, 0+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 283, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __Pyx_XDECREF_SET(__pyx_v_labels, __pyx_t_5); - __pyx_t_5 = 0; - __Pyx_XDECREF_SET(__pyx_v_segments, __pyx_t_9); - __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":284 - * # Labels - * labels, segments = self.labels[index].copy(), self.segments[index].copy() - * if labels.size: # <<<<<<<<<<<<<< - * labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format - * segments = [xyn2xy(x, w, h, padx, pady) for x in segments] - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_labels, __pyx_n_s_size); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_14 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely((__pyx_t_14 < 0))) __PYX_ERR(0, 284, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (__pyx_t_14) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":285 - * labels, segments = self.labels[index].copy(), self.segments[index].copy() - * if labels.size: - * labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format # <<<<<<<<<<<<<< - * segments = [xyn2xy(x, w, h, padx, pady) for x in segments] - * labels9.append(labels) - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_xywhn2xyxy); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 285, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = 
__Pyx_PyObject_GetItem(__pyx_v_labels, __pyx_tuple__17); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 285, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_12 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_12)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_12); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[6] = {__pyx_t_12, __pyx_t_4, __pyx_v_w, __pyx_v_h, __pyx_v_padx, __pyx_v_pady}; - __pyx_t_9 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_6, 5+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 285, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - if (unlikely((PyObject_SetItem(__pyx_v_labels, __pyx_tuple__17, __pyx_t_9) < 0))) __PYX_ERR(0, 285, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":286 - * if labels.size: - * labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format - * segments = [xyn2xy(x, w, h, padx, pady) for x in segments] # <<<<<<<<<<<<<< - * labels9.append(labels) - * segments9.extend(segments) - */ - { /* enter inner scope */ - __pyx_t_9 = PyList_New(0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 286, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_9); - if (likely(PyList_CheckExact(__pyx_v_segments)) || PyTuple_CheckExact(__pyx_v_segments)) { - __pyx_t_5 = __pyx_v_segments; __Pyx_INCREF(__pyx_t_5); __pyx_t_15 = 0; - __pyx_t_16 = NULL; - } else { - __pyx_t_15 = -1; __pyx_t_5 = PyObject_GetIter(__pyx_v_segments); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 286, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_16 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_5); if (unlikely(!__pyx_t_16)) 
__PYX_ERR(0, 286, __pyx_L15_error) - } - for (;;) { - if (likely(!__pyx_t_16)) { - if (likely(PyList_CheckExact(__pyx_t_5))) { - if (__pyx_t_15 >= PyList_GET_SIZE(__pyx_t_5)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_5, __pyx_t_15); __Pyx_INCREF(__pyx_t_4); __pyx_t_15++; if (unlikely((0 < 0))) __PYX_ERR(0, 286, __pyx_L15_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_5, __pyx_t_15); __pyx_t_15++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 286, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - if (__pyx_t_15 >= PyTuple_GET_SIZE(__pyx_t_5)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_5, __pyx_t_15); __Pyx_INCREF(__pyx_t_4); __pyx_t_15++; if (unlikely((0 < 0))) __PYX_ERR(0, 286, __pyx_L15_error) - #else - __pyx_t_4 = PySequence_ITEM(__pyx_t_5, __pyx_t_15); __pyx_t_15++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 286, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } - } else { - __pyx_t_4 = __pyx_t_16(__pyx_t_5); - if (unlikely(!__pyx_t_4)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 286, __pyx_L15_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_4); - } - __Pyx_XDECREF_SET(__pyx_8genexpr7__pyx_v_x, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_xyn2xy); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 286, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_3 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_12))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_12); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_12); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_12, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[6] = {__pyx_t_3, 
__pyx_8genexpr7__pyx_v_x, __pyx_v_w, __pyx_v_h, __pyx_v_padx, __pyx_v_pady}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_12, __pyx_callargs+1-__pyx_t_6, 5+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 286, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } - if (unlikely(__Pyx_ListComp_Append(__pyx_t_9, (PyObject*)__pyx_t_4))) __PYX_ERR(0, 286, __pyx_L15_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_8genexpr7__pyx_v_x); __pyx_8genexpr7__pyx_v_x = 0; - goto __pyx_L18_exit_scope; - __pyx_L15_error:; - __Pyx_XDECREF(__pyx_8genexpr7__pyx_v_x); __pyx_8genexpr7__pyx_v_x = 0; - goto __pyx_L1_error; - __pyx_L18_exit_scope:; - } /* exit inner scope */ - __Pyx_DECREF_SET(__pyx_v_segments, __pyx_t_9); - __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":284 - * # Labels - * labels, segments = self.labels[index].copy(), self.segments[index].copy() - * if labels.size: # <<<<<<<<<<<<<< - * labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format - * segments = [xyn2xy(x, w, h, padx, pady) for x in segments] - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":287 - * labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format - * segments = [xyn2xy(x, w, h, padx, pady) for x in segments] - * labels9.append(labels) # <<<<<<<<<<<<<< - * segments9.extend(segments) - * - */ - __pyx_t_17 = __Pyx_PyObject_Append(__pyx_v_labels9, __pyx_v_labels); if (unlikely(__pyx_t_17 == ((int)-1))) __PYX_ERR(0, 287, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":288 - * segments = [xyn2xy(x, w, h, padx, pady) for x in segments] - * labels9.append(labels) - * segments9.extend(segments) # <<<<<<<<<<<<<< - * - * # Image - */ - __pyx_t_17 = __Pyx_PyList_Extend(__pyx_v_segments9, __pyx_v_segments); if 
(unlikely(__pyx_t_17 == ((int)-1))) __PYX_ERR(0, 288, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":291 - * - * # Image - * img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:] # img9[ymin:ymax, xmin:xmax] # <<<<<<<<<<<<<< - * hp, wp = h, w # height, width previous - * - */ - __pyx_t_9 = PyNumber_Subtract(__pyx_v_y1, __pyx_v_pady); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_5 = PySlice_New(__pyx_t_9, Py_None, Py_None); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = PyNumber_Subtract(__pyx_v_x1, __pyx_v_padx); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_4 = PySlice_New(__pyx_t_9, Py_None, Py_None); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = PyTuple_New(2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_t_4); - __pyx_t_5 = 0; - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_GetItem(__pyx_v_img, __pyx_t_9); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (unlikely(!__pyx_v_img9)) { __Pyx_RaiseUnboundLocalError("img9"); __PYX_ERR(0, 291, __pyx_L1_error) } - __pyx_t_9 = PySlice_New(__pyx_v_y1, __pyx_v_y2, Py_None); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_5 = PySlice_New(__pyx_v_x1, __pyx_v_x2, Py_None); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_GIVEREF(__pyx_t_9); - 
PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_9); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_5); - __pyx_t_9 = 0; - __pyx_t_5 = 0; - if (unlikely((PyObject_SetItem(__pyx_v_img9, __pyx_t_12, __pyx_t_4) < 0))) __PYX_ERR(0, 291, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":292 - * # Image - * img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:] # img9[ymin:ymax, xmin:xmax] - * hp, wp = h, w # height, width previous # <<<<<<<<<<<<<< - * - * # Offset - */ - __pyx_t_4 = __pyx_v_h; - __Pyx_INCREF(__pyx_t_4); - __pyx_t_12 = __pyx_v_w; - __Pyx_INCREF(__pyx_t_12); - __Pyx_XDECREF_SET(__pyx_v_hp, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_wp, __pyx_t_12); - __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":253 - * indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices - * random.shuffle(indices) - * for i, index in enumerate(indices): # <<<<<<<<<<<<<< - * # Load image - * img, _, (h, w) = load_image(self, index) - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":295 - * - * # Offset - * yc, xc = (int(random.uniform(0, s)) for _ in self.mosaic_border) # mosaic center x, y # <<<<<<<<<<<<<< - * img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s] - * - */ - __pyx_t_1 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_3genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 295, __pyx_L1_error) - } - 
#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_12 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_12 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_12); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_12 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_11 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_4); - index = 0; __pyx_t_2 = __pyx_t_11(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L19_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_12 = __pyx_t_11(__pyx_t_4); if (unlikely(!__pyx_t_12)) goto __pyx_L19_unpacking_failed; - __Pyx_GOTREF(__pyx_t_12); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_11(__pyx_t_4), 2) < 0) __PYX_ERR(0, 295, __pyx_L1_error) - __pyx_t_11 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L20_unpacking_done; - __pyx_L19_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_11 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 295, __pyx_L1_error) - __pyx_L20_unpacking_done:; - } - __pyx_v_yc = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_xc = __pyx_t_12; - __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":296 - * # Offset - * yc, xc = (int(random.uniform(0, s)) for _ in self.mosaic_border) # mosaic center x, y - * img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s] # <<<<<<<<<<<<<< - * - * # Concat/clip labels - */ - if 
(unlikely(!__pyx_v_img9)) { __Pyx_RaiseUnboundLocalError("img9"); __PYX_ERR(0, 296, __pyx_L1_error) } - __pyx_t_1 = __Pyx_PyInt_MultiplyCObj(__pyx_int_2, __pyx_cur_scope->__pyx_v_s, 2, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_12 = PyNumber_Add(__pyx_v_yc, __pyx_t_1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PySlice_New(__pyx_v_yc, __pyx_t_12, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = __Pyx_PyInt_MultiplyCObj(__pyx_int_2, __pyx_cur_scope->__pyx_v_s, 2, 0, 0); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_2 = PyNumber_Add(__pyx_v_xc, __pyx_t_12); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PySlice_New(__pyx_v_xc, __pyx_t_2, Py_None); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_12); - __pyx_t_1 = 0; - __pyx_t_12 = 0; - __pyx_t_12 = __Pyx_PyObject_GetItem(__pyx_v_img9, __pyx_t_2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF_SET(__pyx_v_img9, __pyx_t_12); - __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":299 - * - * # Concat/clip labels - * labels9 = np.concatenate(labels9, 0) # <<<<<<<<<<<<<< - * labels9[:, [1, 3]] -= xc - * labels9[:, [2, 4]] -= yc - */ - 
__Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 299, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_concatenate); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 299, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_v_labels9, __pyx_int_0}; - __pyx_t_12 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 299, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_DECREF_SET(__pyx_v_labels9, __pyx_t_12); - __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":300 - * # Concat/clip labels - * labels9 = np.concatenate(labels9, 0) - * labels9[:, [1, 3]] -= xc # <<<<<<<<<<<<<< - * labels9[:, [2, 4]] -= yc - * c = np.array([xc, yc]) # centers - */ - __pyx_t_12 = PyList_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - PyList_SET_ITEM(__pyx_t_12, 0, __pyx_int_1); - __Pyx_INCREF(__pyx_int_3); - __Pyx_GIVEREF(__pyx_int_3); - PyList_SET_ITEM(__pyx_t_12, 1, __pyx_int_3); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_slice__15); - __Pyx_GIVEREF(__pyx_slice__15); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_slice__15); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_12); - 
__pyx_t_12 = 0; - __pyx_t_12 = __Pyx_PyObject_GetItem(__pyx_v_labels9, __pyx_t_1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_2 = PyNumber_InPlaceSubtract(__pyx_t_12, __pyx_v_xc); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (unlikely((PyObject_SetItem(__pyx_v_labels9, __pyx_t_1, __pyx_t_2) < 0))) __PYX_ERR(0, 300, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":301 - * labels9 = np.concatenate(labels9, 0) - * labels9[:, [1, 3]] -= xc - * labels9[:, [2, 4]] -= yc # <<<<<<<<<<<<<< - * c = np.array([xc, yc]) # centers - * segments9 = [x - c for x in segments9] - */ - __pyx_t_1 = PyList_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 301, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_int_2); - __Pyx_GIVEREF(__pyx_int_2); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_int_2); - __Pyx_INCREF(__pyx_int_4); - __Pyx_GIVEREF(__pyx_int_4); - PyList_SET_ITEM(__pyx_t_1, 1, __pyx_int_4); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 301, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_slice__15); - __Pyx_GIVEREF(__pyx_slice__15); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_slice__15); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_v_labels9, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 301, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_12 = PyNumber_InPlaceSubtract(__pyx_t_1, __pyx_v_yc); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 301, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely((PyObject_SetItem(__pyx_v_labels9, __pyx_t_2, __pyx_t_12) < 0))) __PYX_ERR(0, 301, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":302 - * labels9[:, [1, 3]] -= xc - * labels9[:, [2, 4]] -= yc - * c = np.array([xc, yc]) # centers # <<<<<<<<<<<<<< - * segments9 = [x - c for x in segments9] - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_np); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_12, __pyx_n_s_array); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyList_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_INCREF(__pyx_v_xc); - __Pyx_GIVEREF(__pyx_v_xc); - PyList_SET_ITEM(__pyx_t_12, 0, __pyx_v_xc); - __Pyx_INCREF(__pyx_v_yc); - __Pyx_GIVEREF(__pyx_v_yc); - PyList_SET_ITEM(__pyx_t_12, 1, __pyx_v_yc); - __pyx_t_4 = NULL; - __pyx_t_6 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_6 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_t_12}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_6, 1+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_c); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_c, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":303 - * labels9[:, [2, 4]] -= yc - * c = np.array([xc, yc]) # centers - * segments9 = [x - c for x in segments9] # <<<<<<<<<<<<<< - * - 
* for x in (labels9[:, 1:], *segments9): - */ - { /* enter inner scope */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 303, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __pyx_v_segments9; __Pyx_INCREF(__pyx_t_1); __pyx_t_7 = 0; - for (;;) { - if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_12 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_7); __Pyx_INCREF(__pyx_t_12); __pyx_t_7++; if (unlikely((0 < 0))) __PYX_ERR(0, 303, __pyx_L23_error) - #else - __pyx_t_12 = PySequence_ITEM(__pyx_t_1, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 303, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_12); - #endif - __Pyx_XDECREF_SET(__pyx_8genexpr9__pyx_v_x, __pyx_t_12); - __pyx_t_12 = 0; - __pyx_t_12 = PyNumber_Subtract(__pyx_8genexpr9__pyx_v_x, __pyx_cur_scope->__pyx_v_c); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 303, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_12); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_12))) __PYX_ERR(0, 303, __pyx_L23_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_8genexpr9__pyx_v_x); __pyx_8genexpr9__pyx_v_x = 0; - goto __pyx_L26_exit_scope; - __pyx_L23_error:; - __Pyx_XDECREF(__pyx_8genexpr9__pyx_v_x); __pyx_8genexpr9__pyx_v_x = 0; - goto __pyx_L1_error; - __pyx_L26_exit_scope:; - } /* exit inner scope */ - __Pyx_DECREF_SET(__pyx_v_segments9, ((PyObject*)__pyx_t_2)); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":305 - * segments9 = [x - c for x in segments9] - * - * for x in (labels9[:, 1:], *segments9): # <<<<<<<<<<<<<< - * np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - * # img9, labels9 = replicate(img9, labels9) # replicate - */ - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_v_labels9, __pyx_tuple__17); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 305, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_12 = 
PyList_New(1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 305, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_GIVEREF(__pyx_t_1); - PyList_SET_ITEM(__pyx_t_12, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_2 = __pyx_t_12; - __pyx_t_12 = 0; - if (__Pyx_PyList_Extend(__pyx_t_2, __pyx_v_segments9) < 0) __PYX_ERR(0, 305, __pyx_L1_error) - { - PyObject *__pyx_temp = PyList_AsTuple(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); - __pyx_t_2 = __pyx_temp; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 305, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - } - __pyx_t_12 = __pyx_t_2; __Pyx_INCREF(__pyx_t_12); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - for (;;) { - if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_12)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_12, __pyx_t_7); __Pyx_INCREF(__pyx_t_2); __pyx_t_7++; if (unlikely((0 < 0))) __PYX_ERR(0, 305, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_12, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 305, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - __Pyx_XDECREF_SET(__pyx_v_x, __pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":306 - * - * for x in (labels9[:, 1:], *segments9): - * np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() # <<<<<<<<<<<<<< - * # img9, labels9 = replicate(img9, labels9) # replicate - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_clip); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyInt_MultiplyCObj(__pyx_int_2, __pyx_cur_scope->__pyx_v_s, 2, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 
306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_x); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_out, __pyx_v_x) < 0) __PYX_ERR(0, 306, __pyx_L1_error) - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_4, __pyx_t_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":305 - * segments9 = [x - c for x in segments9] - * - * for x in (labels9[:, 1:], *segments9): # <<<<<<<<<<<<<< - * np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - * # img9, labels9 = replicate(img9, labels9) # replicate - */ - } - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":310 - * - * - * return img9, labels9 # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_INCREF(__pyx_v_img9); - __Pyx_GIVEREF(__pyx_v_img9); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_v_img9); - __Pyx_INCREF(__pyx_v_labels9); - __Pyx_GIVEREF(__pyx_v_labels9); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_v_labels9); - __pyx_r = __pyx_t_12; - __pyx_t_12 = 0; - goto __pyx_L0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":247 - * - * - * def load_mosaic9(self, index): # <<<<<<<<<<<<<< - * # YOLOv5 9-mosaic loader. 
Loads 1 image + 8 random images into a 9-image mosaic - * labels9, segments9 = [], [] - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_12); - __Pyx_XDECREF(__pyx_t_13); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.load_mosaic9", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_labels9); - __Pyx_XDECREF(__pyx_v_segments9); - __Pyx_XDECREF(__pyx_v_indices); - __Pyx_XDECREF(__pyx_v_i); - __Pyx_XDECREF(__pyx_v_img); - __Pyx_XDECREF(__pyx_v__); - __Pyx_XDECREF(__pyx_v_h); - __Pyx_XDECREF(__pyx_v_w); - __Pyx_XDECREF(__pyx_v_img9); - __Pyx_XDECREF(__pyx_v_h0); - __Pyx_XDECREF(__pyx_v_w0); - __Pyx_XDECREF(__pyx_v_padx); - __Pyx_XDECREF(__pyx_v_pady); - __Pyx_XDECREF(__pyx_v_x1); - __Pyx_XDECREF(__pyx_v_y1); - __Pyx_XDECREF(__pyx_v_x2); - __Pyx_XDECREF(__pyx_v_y2); - __Pyx_XDECREF(__pyx_v_labels); - __Pyx_XDECREF(__pyx_v_segments); - __Pyx_XDECREF(__pyx_v_hp); - __Pyx_XDECREF(__pyx_v_wp); - __Pyx_XDECREF(__pyx_v_yc); - __Pyx_XDECREF(__pyx_v_xc); - __Pyx_XDECREF(__pyx_v_x); - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_2generator2); - __Pyx_XDECREF(__pyx_8genexpr7__pyx_v_x); - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_12load_mosaic9_5generator3); - __Pyx_XDECREF(__pyx_8genexpr9__pyx_v_x); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":313 - * - * - * def create_folder(path='./new'): # <<<<<<<<<<<<<< - * # Create folder - * if os.path.exists(path): - */ - -/* Python wrapper */ -static PyObject 
*__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_15create_folder(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_15create_folder = {"create_folder", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_15create_folder, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_15create_folder(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_path = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("create_folder (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_path,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_path,0}; - #endif - PyObject* values[1] = {0}; - values[0] = ((PyObject *)((PyObject*)__pyx_kp_u_new)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_path); - if (value) { values[0] = value; kw_args--; } - else if 
(unlikely(PyErr_Occurred())) __PYX_ERR(0, 313, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "create_folder") < 0)) __PYX_ERR(0, 313, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_path = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("create_folder", 0, 0, 1, __pyx_nargs); __PYX_ERR(0, 313, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.create_folder", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_14create_folder(__pyx_self, __pyx_v_path); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_14create_folder(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_path) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("create_folder", 0); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":315 - * def create_folder(path='./new'): - * # Create folder - * if os.path.exists(path): # <<<<<<<<<<<<<< - * shutil.rmtree(path) # delete output folder - * os.makedirs(path) # make new output folder - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_os); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 315, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_path); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 315, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_exists); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 315, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_v_path}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 315, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 315, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_5) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":316 - * # Create folder - * if os.path.exists(path): - * shutil.rmtree(path) # delete output folder # <<<<<<<<<<<<<< - * os.makedirs(path) # make new output folder - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_shutil); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_rmtree); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = 
PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_v_path}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":315 - * def create_folder(path='./new'): - * # Create folder - * if os.path.exists(path): # <<<<<<<<<<<<<< - * shutil.rmtree(path) # delete output folder - * os.makedirs(path) # make new output folder - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":317 - * if os.path.exists(path): - * shutil.rmtree(path) # delete output folder - * os.makedirs(path) # make new output folder # <<<<<<<<<<<<<< - * - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_os); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 317, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_makedirs); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 317, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_v_path}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if 
(unlikely(!__pyx_t_1)) __PYX_ERR(0, 317, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":313 - * - * - * def create_folder(path='./new'): # <<<<<<<<<<<<<< - * # Create folder - * if os.path.exists(path): - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.create_folder", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":320 - * - * - * def flatten_recursive(path='../datasets/coco128'): # <<<<<<<<<<<<<< - * # Flatten a recursive directory by bringing all files to top level - * new_path = Path(path + '_flat') - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_17flatten_recursive(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_17flatten_recursive = {"flatten_recursive", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_17flatten_recursive, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_17flatten_recursive(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_path = 0; - #if !CYTHON_METH_FASTCALL - 
CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("flatten_recursive (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_path,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_path,0}; - #endif - PyObject* values[1] = {0}; - values[0] = ((PyObject *)((PyObject*)__pyx_kp_u_datasets_coco128)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_path); - if (value) { values[0] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 320, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "flatten_recursive") < 0)) __PYX_ERR(0, 320, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_path = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("flatten_recursive", 0, 0, 1, __pyx_nargs); __PYX_ERR(0, 320, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.flatten_recursive", __pyx_clineno, __pyx_lineno, __pyx_filename); - 
__Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_16flatten_recursive(__pyx_self, __pyx_v_path); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_16flatten_recursive(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_path) { - PyObject *__pyx_v_new_path = NULL; - PyObject *__pyx_v_file = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - Py_ssize_t __pyx_t_8; - PyObject *(*__pyx_t_9)(PyObject *); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("flatten_recursive", 0); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":322 - * def flatten_recursive(path='../datasets/coco128'): - * # Flatten a recursive directory by bringing all files to top level - * new_path = Path(path + '_flat') # <<<<<<<<<<<<<< - * create_folder(new_path) - * for file in tqdm(glob.glob(str(Path(path)) + '/[inserted by cython to avoid comment start]**[inserted by cython to avoid comment closer]/[inserted by cython to avoid comment start]*.*', recursive=True)): - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Path); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 322, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Add(__pyx_v_path, __pyx_n_u_flat); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 322, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_4); - 
__Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_t_3}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 322, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_v_new_path = __pyx_t_1; - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":323 - * # Flatten a recursive directory by bringing all files to top level - * new_path = Path(path + '_flat') - * create_folder(new_path) # <<<<<<<<<<<<<< - * for file in tqdm(glob.glob(str(Path(path)) + '/[inserted by cython to avoid comment start]**[inserted by cython to avoid comment closer]/[inserted by cython to avoid comment start]*.*', recursive=True)): - * shutil.copyfile(file, new_path / Path(file).name) - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_create_folder); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 323, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_v_new_path}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 323, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":324 - * new_path = Path(path + '_flat') - * create_folder(new_path) - * 
for file in tqdm(glob.glob(str(Path(path)) + '/[inserted by cython to avoid comment start]**[inserted by cython to avoid comment closer]/[inserted by cython to avoid comment start]*.*', recursive=True)): # <<<<<<<<<<<<<< - * shutil.copyfile(file, new_path / Path(file).name) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_tqdm); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_glob); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_glob); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_Path); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_7, __pyx_v_path}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = __Pyx_PyObject_Str(__pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Add(__pyx_t_6, __pyx_kp_u__18); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = PyTuple_New(1); if 
(unlikely(!__pyx_t_6)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_recursive, Py_True) < 0) __PYX_ERR(0, 324, __pyx_L1_error) - __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_6, __pyx_t_3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_t_7}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); __pyx_t_8 = 0; - __pyx_t_9 = NULL; - } else { - __pyx_t_8 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_9 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 324, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_9)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_8 
>= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_8); __Pyx_INCREF(__pyx_t_1); __pyx_t_8++; if (unlikely((0 < 0))) __PYX_ERR(0, 324, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_8); __pyx_t_8++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_8 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_8); __Pyx_INCREF(__pyx_t_1); __pyx_t_8++; if (unlikely((0 < 0))) __PYX_ERR(0, 324, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_8); __pyx_t_8++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 324, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_9(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 324, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XDECREF_SET(__pyx_v_file, __pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":325 - * create_folder(new_path) - * for file in tqdm(glob.glob(str(Path(path)) + '/[inserted by cython to avoid comment start]**[inserted by cython to avoid comment closer]/[inserted by cython to avoid comment start]*.*', recursive=True)): - * shutil.copyfile(file, new_path / Path(file).name) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_shutil); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 325, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_copyfile); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 325, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - 
__Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_Path); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 325, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_v_file}; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 325, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_name); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 325, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyNumber_Divide(__pyx_v_new_path, __pyx_t_6); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 325, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = NULL; - __pyx_t_5 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_6, __pyx_v_file, __pyx_t_7}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 2+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 325, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; 
- - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":324 - * new_path = Path(path + '_flat') - * create_folder(new_path) - * for file in tqdm(glob.glob(str(Path(path)) + '/[inserted by cython to avoid comment start]**[inserted by cython to avoid comment closer]/[inserted by cython to avoid comment start]*.*', recursive=True)): # <<<<<<<<<<<<<< - * shutil.copyfile(file, new_path / Path(file).name) - * - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":320 - * - * - * def flatten_recursive(path='../datasets/coco128'): # <<<<<<<<<<<<<< - * # Flatten a recursive directory by bringing all files to top level - * new_path = Path(path + '_flat') - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.flatten_recursive", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_new_path); - __Pyx_XDECREF(__pyx_v_file); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":328 - * - * - * def extract_boxes(path='../datasets/coco128'): # from utils.datasets import *; extract_boxes() # <<<<<<<<<<<<<< - * # Convert detection dataset into classification dataset, with one directory per class - * path = Path(path) # images dir - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_19extract_boxes(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef 
__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_19extract_boxes = {"extract_boxes", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_19extract_boxes, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_19extract_boxes(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_path = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("extract_boxes (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_path,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_path,0}; - #endif - PyObject* values[1] = {0}; - values[0] = ((PyObject *)((PyObject*)__pyx_kp_u_datasets_coco128)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_path); - if (value) { values[0] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 328, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, 
"extract_boxes") < 0)) __PYX_ERR(0, 328, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_path = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("extract_boxes", 0, 0, 1, __pyx_nargs); __PYX_ERR(0, 328, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.extract_boxes", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_18extract_boxes(__pyx_self, __pyx_v_path); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_18extract_boxes(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_path) { - PyObject *__pyx_v_files = NULL; - Py_ssize_t __pyx_v_n; - PyObject *__pyx_v_im_file = NULL; - PyObject *__pyx_v_im = NULL; - PyObject *__pyx_v_h = NULL; - PyObject *__pyx_v_w = NULL; - PyObject *__pyx_v_lb_file = NULL; - PyObject *__pyx_v_f = NULL; - PyObject *__pyx_v_lb = NULL; - PyObject *__pyx_v_j = NULL; - PyObject *__pyx_v_x = NULL; - PyObject *__pyx_v_c = NULL; - PyObject *__pyx_v_b = NULL; - PyObject *__pyx_9genexpr10__pyx_v_x = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - Py_ssize_t __pyx_t_8; - PyObject *(*__pyx_t_9)(PyObject *); - int __pyx_t_10; - PyObject *(*__pyx_t_11)(PyObject *); - PyObject *__pyx_t_12 = NULL; - PyObject *__pyx_t_13 = NULL; - PyObject *__pyx_t_14 = NULL; - PyObject *__pyx_t_15 = NULL; - PyObject *__pyx_t_16 = NULL; - 
PyObject *__pyx_t_17 = NULL; - PyObject *__pyx_t_18 = NULL; - Py_ssize_t __pyx_t_19; - PyObject *(*__pyx_t_20)(PyObject *); - PyObject *__pyx_t_21 = NULL; - Py_ssize_t __pyx_t_22; - Py_UCS4 __pyx_t_23; - PyObject *__pyx_t_24 = NULL; - PyObject *__pyx_t_25 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("extract_boxes", 0); - __Pyx_INCREF(__pyx_v_path); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":330 - * def extract_boxes(path='../datasets/coco128'): # from utils.datasets import *; extract_boxes() - * # Convert detection dataset into classification dataset, with one directory per class - * path = Path(path) # images dir # <<<<<<<<<<<<<< - * shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None # remove existing - * files = list(path.rglob('*.*')) - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Path); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 330, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_v_path}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 330, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF_SET(__pyx_v_path, __pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":331 - * # Convert detection dataset into classification dataset, with one directory per class - * path = Path(path) # images dir - * shutil.rmtree(path / 'classifier') if (path / 
'classifier').is_dir() else None # remove existing # <<<<<<<<<<<<<< - * files = list(path.rglob('*.*')) - * n = len(files) # number of files - */ - __pyx_t_3 = __Pyx_PyNumber_Divide(__pyx_v_path, __pyx_n_u_classifier); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_is_dir); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_3, }; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_6) { - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_shutil); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_rmtree); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyNumber_Divide(__pyx_v_path, __pyx_n_u_classifier); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_7 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_7 = 
PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_7, __pyx_t_5}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_1 = __pyx_t_2; - __pyx_t_2 = 0; - } else { - __Pyx_INCREF(Py_None); - __pyx_t_1 = Py_None; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":332 - * path = Path(path) # images dir - * shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None # remove existing - * files = list(path.rglob('*.*')) # <<<<<<<<<<<<<< - * n = len(files) # number of files - * for im_file in tqdm(files, total=n): - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_path, __pyx_n_s_rglob); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 332, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_kp_u__4}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 332, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = __Pyx_PySequence_ListKeepNew(__pyx_t_1); if 
(unlikely(!__pyx_t_2)) __PYX_ERR(0, 332, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_files = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":333 - * shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None # remove existing - * files = list(path.rglob('*.*')) - * n = len(files) # number of files # <<<<<<<<<<<<<< - * for im_file in tqdm(files, total=n): - * if im_file.suffix[1:] in IMG_FORMATS: - */ - __pyx_t_8 = PyList_GET_SIZE(__pyx_v_files); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(0, 333, __pyx_L1_error) - __pyx_v_n = __pyx_t_8; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":334 - * files = list(path.rglob('*.*')) - * n = len(files) # number of files - * for im_file in tqdm(files, total=n): # <<<<<<<<<<<<<< - * if im_file.suffix[1:] in IMG_FORMATS: - * # image - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_tqdm); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 334, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 334, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_files); - __Pyx_GIVEREF(__pyx_v_files); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_files); - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 334, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyInt_FromSsize_t(__pyx_v_n); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 334, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_total, __pyx_t_5) < 0) __PYX_ERR(0, 334, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 334, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if 
(likely(PyList_CheckExact(__pyx_t_5)) || PyTuple_CheckExact(__pyx_t_5)) { - __pyx_t_3 = __pyx_t_5; __Pyx_INCREF(__pyx_t_3); __pyx_t_8 = 0; - __pyx_t_9 = NULL; - } else { - __pyx_t_8 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 334, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_9 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 334, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - for (;;) { - if (likely(!__pyx_t_9)) { - if (likely(PyList_CheckExact(__pyx_t_3))) { - if (__pyx_t_8 >= PyList_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_8); __Pyx_INCREF(__pyx_t_5); __pyx_t_8++; if (unlikely((0 < 0))) __PYX_ERR(0, 334, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_8); __pyx_t_8++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 334, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } else { - if (__pyx_t_8 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_8); __Pyx_INCREF(__pyx_t_5); __pyx_t_8++; if (unlikely((0 < 0))) __PYX_ERR(0, 334, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_8); __pyx_t_8++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 334, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } - } else { - __pyx_t_5 = __pyx_t_9(__pyx_t_3); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 334, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_5); - } - __Pyx_XDECREF_SET(__pyx_v_im_file, __pyx_t_5); - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":335 - * n = len(files) # number of files - * for im_file in tqdm(files, total=n): - * if 
im_file.suffix[1:] in IMG_FORMATS: # <<<<<<<<<<<<<< - * # image - * im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_im_file, __pyx_n_s_suffix); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 335, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = __Pyx_PyObject_GetSlice(__pyx_t_5, 1, 0, NULL, NULL, &__pyx_slice__16, 1, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 335, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_IMG_FORMATS); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 335, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = (__Pyx_PySequence_ContainsTF(__pyx_t_1, __pyx_t_5, Py_EQ)); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 335, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_10 = (__pyx_t_6 != 0); - if (__pyx_t_10) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":337 - * if im_file.suffix[1:] in IMG_FORMATS: - * # image - * im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB # <<<<<<<<<<<<<< - * h, w = im.shape[:2] - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_cv2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 337, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_imread); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 337, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_Str(__pyx_v_im_file); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 337, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_7 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - 
PyObject *__pyx_callargs[2] = {__pyx_t_7, __pyx_t_1}; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 337, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_5, __pyx_tuple__19); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 337, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF_SET(__pyx_v_im, __pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":338 - * # image - * im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB - * h, w = im.shape[:2] # <<<<<<<<<<<<<< - * - * # labels - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_im, __pyx_n_s_shape); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 338, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = __Pyx_PyObject_GetSlice(__pyx_t_2, 0, 2, NULL, NULL, &__pyx_slice__14, 0, 1, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 338, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_5))) || (PyList_CheckExact(__pyx_t_5))) { - PyObject* sequence = __pyx_t_5; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 338, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 338, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 338, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_7 = PyObject_GetIter(__pyx_t_5); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 338, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_11 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_7); - index = 0; __pyx_t_2 = __pyx_t_11(__pyx_t_7); if (unlikely(!__pyx_t_2)) goto __pyx_L6_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_11(__pyx_t_7); if (unlikely(!__pyx_t_1)) goto __pyx_L6_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_11(__pyx_t_7), 2) < 0) __PYX_ERR(0, 338, __pyx_L1_error) - __pyx_t_11 = NULL; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L7_unpacking_done; - __pyx_L6_unpacking_failed:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_11 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 338, __pyx_L1_error) - __pyx_L7_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_h, __pyx_t_2); - __pyx_t_2 = 0; - __Pyx_XDECREF_SET(__pyx_v_w, __pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":341 - * - * # labels - * lb_file = Path(img2label_paths([str(im_file)])[0]) # <<<<<<<<<<<<<< - * if Path(lb_file).exists(): - * with open(lb_file) as f: - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Path); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 341, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_img2label_paths); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 341, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_12 = __Pyx_PyObject_Str(__pyx_v_im_file); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 341, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_13 = PyList_New(1); if 
(unlikely(!__pyx_t_13)) __PYX_ERR(0, 341, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_GIVEREF(__pyx_t_12); - PyList_SET_ITEM(__pyx_t_13, 0, __pyx_t_12); - __pyx_t_12 = 0; - __pyx_t_12 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_12)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_12); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_12, __pyx_t_13}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 341, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __pyx_t_7 = __Pyx_GetItemInt(__pyx_t_2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 341, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_7}; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 341, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __Pyx_XDECREF_SET(__pyx_v_lb_file, __pyx_t_5); - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":342 - * # labels - * 
lb_file = Path(img2label_paths([str(im_file)])[0]) - * if Path(lb_file).exists(): # <<<<<<<<<<<<<< - * with open(lb_file) as f: - * lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_Path); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 342, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_v_lb_file}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 342, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_exists); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 342, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_1, }; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 342, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __pyx_t_10 = __Pyx_PyObject_IsTrue(__pyx_t_5); if 
(unlikely((__pyx_t_10 < 0))) __PYX_ERR(0, 342, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_10) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":343 - * lb_file = Path(img2label_paths([str(im_file)])[0]) - * if Path(lb_file).exists(): - * with open(lb_file) as f: # <<<<<<<<<<<<<< - * lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - * - */ - /*with:*/ { - __pyx_t_5 = __Pyx_PyObject_CallOneArg(__pyx_builtin_open, __pyx_v_lb_file); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_14 = __Pyx_PyObject_LookupSpecial(__pyx_t_5, __pyx_n_s_exit); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - __pyx_t_1 = __Pyx_PyObject_LookupSpecial(__pyx_t_5, __pyx_n_s_enter); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 343, __pyx_L9_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_2, }; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 343, __pyx_L9_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_t_1 = __pyx_t_7; - __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - /*try:*/ { - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_15, &__pyx_t_16, &__pyx_t_17); - __Pyx_XGOTREF(__pyx_t_15); - __Pyx_XGOTREF(__pyx_t_16); - __Pyx_XGOTREF(__pyx_t_17); - /*try:*/ { - __Pyx_XDECREF_SET(__pyx_v_f, __pyx_t_1); - __pyx_t_1 = 0; - - /* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":344 - * if Path(lb_file).exists(): - * with open(lb_file) as f: - * lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels # <<<<<<<<<<<<<< - * - * for j, x in enumerate(lb): - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 344, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_array); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 344, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - { /* enter inner scope */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_v_f, __pyx_n_s_read); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_18 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_12))) { - __pyx_t_18 = PyMethod_GET_SELF(__pyx_t_12); - if (likely(__pyx_t_18)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_12); - __Pyx_INCREF(__pyx_t_18); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_12, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_18, }; - __pyx_t_13 = __Pyx_PyObject_FastCall(__pyx_t_12, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_18); __pyx_t_18 = 0; - if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_13, __pyx_n_s_strip); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_13 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_12))) { - __pyx_t_13 = PyMethod_GET_SELF(__pyx_t_12); - if (likely(__pyx_t_13)) { - PyObject* 
function = PyMethod_GET_FUNCTION(__pyx_t_12); - __Pyx_INCREF(__pyx_t_13); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_12, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_13, }; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_12, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } - __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_splitlines); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_12))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_12); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_12); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_12, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_2, }; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_12, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - } - if (likely(PyList_CheckExact(__pyx_t_7)) || PyTuple_CheckExact(__pyx_t_7)) { - __pyx_t_12 = __pyx_t_7; __Pyx_INCREF(__pyx_t_12); __pyx_t_19 = 0; - __pyx_t_20 = NULL; - } else { - __pyx_t_19 = -1; __pyx_t_12 = PyObject_GetIter(__pyx_t_7); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_20 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_12); if (unlikely(!__pyx_t_20)) __PYX_ERR(0, 344, __pyx_L25_error) - } - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - for (;;) { - if (likely(!__pyx_t_20)) { - if (likely(PyList_CheckExact(__pyx_t_12))) { - if (__pyx_t_19 >= 
PyList_GET_SIZE(__pyx_t_12)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyList_GET_ITEM(__pyx_t_12, __pyx_t_19); __Pyx_INCREF(__pyx_t_7); __pyx_t_19++; if (unlikely((0 < 0))) __PYX_ERR(0, 344, __pyx_L25_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_12, __pyx_t_19); __pyx_t_19++; if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } else { - if (__pyx_t_19 >= PyTuple_GET_SIZE(__pyx_t_12)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyTuple_GET_ITEM(__pyx_t_12, __pyx_t_19); __Pyx_INCREF(__pyx_t_7); __pyx_t_19++; if (unlikely((0 < 0))) __PYX_ERR(0, 344, __pyx_L25_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_12, __pyx_t_19); __pyx_t_19++; if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } - } else { - __pyx_t_7 = __pyx_t_20(__pyx_t_12); - if (unlikely(!__pyx_t_7)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 344, __pyx_L25_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_7); - } - __Pyx_XDECREF_SET(__pyx_9genexpr10__pyx_v_x, __pyx_t_7); - __pyx_t_7 = 0; - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_9genexpr10__pyx_v_x, __pyx_n_s_split); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_13 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_13 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_13)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_13); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_13, }; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 
0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_7))) __PYX_ERR(0, 344, __pyx_L25_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_9genexpr10__pyx_v_x); __pyx_9genexpr10__pyx_v_x = 0; - goto __pyx_L28_exit_scope; - __pyx_L25_error:; - __Pyx_XDECREF(__pyx_9genexpr10__pyx_v_x); __pyx_9genexpr10__pyx_v_x = 0; - goto __pyx_L15_error; - __pyx_L28_exit_scope:; - } /* exit inner scope */ - __pyx_t_12 = PyTuple_New(1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 344, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 344, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 344, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_float32); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 344, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_dtype, __pyx_t_2) < 0) __PYX_ERR(0, 344, __pyx_L15_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_12, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 344, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_lb, __pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":343 - * lb_file = Path(img2label_paths([str(im_file)])[0]) - * if Path(lb_file).exists(): - * with open(lb_file) as f: # <<<<<<<<<<<<<< - * lb = 
np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - * - */ - } - __Pyx_XDECREF(__pyx_t_15); __pyx_t_15 = 0; - __Pyx_XDECREF(__pyx_t_16); __pyx_t_16 = 0; - __Pyx_XDECREF(__pyx_t_17); __pyx_t_17 = 0; - goto __pyx_L22_try_end; - __pyx_L15_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 0; - __Pyx_XDECREF(__pyx_t_18); __pyx_t_18 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - /*except:*/ { - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.extract_boxes", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_2, &__pyx_t_1, &__pyx_t_12) < 0) __PYX_ERR(0, 343, __pyx_L17_except_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_5 = PyTuple_Pack(3, __pyx_t_2, __pyx_t_1, __pyx_t_12); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 343, __pyx_L17_except_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_21 = __Pyx_PyObject_Call(__pyx_t_14, __pyx_t_5, NULL); - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_21)) __PYX_ERR(0, 343, __pyx_L17_except_error) - __Pyx_GOTREF(__pyx_t_21); - __pyx_t_10 = __Pyx_PyObject_IsTrue(__pyx_t_21); - __Pyx_DECREF(__pyx_t_21); __pyx_t_21 = 0; - if (__pyx_t_10 < 0) __PYX_ERR(0, 343, __pyx_L17_except_error) - __pyx_t_6 = ((!(__pyx_t_10 != 0)) != 0); - if (unlikely(__pyx_t_6)) { - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_12); - __Pyx_ErrRestoreWithState(__pyx_t_2, __pyx_t_1, __pyx_t_12); - __pyx_t_2 = 0; __pyx_t_1 = 0; __pyx_t_12 = 0; - __PYX_ERR(0, 343, __pyx_L17_except_error) - } - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - goto __pyx_L16_exception_handled; - } - 
__pyx_L17_except_error:; - __Pyx_XGIVEREF(__pyx_t_15); - __Pyx_XGIVEREF(__pyx_t_16); - __Pyx_XGIVEREF(__pyx_t_17); - __Pyx_ExceptionReset(__pyx_t_15, __pyx_t_16, __pyx_t_17); - goto __pyx_L1_error; - __pyx_L16_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_15); - __Pyx_XGIVEREF(__pyx_t_16); - __Pyx_XGIVEREF(__pyx_t_17); - __Pyx_ExceptionReset(__pyx_t_15, __pyx_t_16, __pyx_t_17); - __pyx_L22_try_end:; - } - } - /*finally:*/ { - /*normal exit:*/{ - if (__pyx_t_14) { - __pyx_t_17 = __Pyx_PyObject_Call(__pyx_t_14, __pyx_tuple__20, NULL); - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - if (unlikely(!__pyx_t_17)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_17); - __Pyx_DECREF(__pyx_t_17); __pyx_t_17 = 0; - } - goto __pyx_L14; - } - __pyx_L14:; - } - goto __pyx_L32; - __pyx_L9_error:; - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - goto __pyx_L1_error; - __pyx_L32:; - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":346 - * lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - * - * for j, x in enumerate(lb): # <<<<<<<<<<<<<< - * c = int(x[0]) # class - * f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - */ - __Pyx_INCREF(__pyx_int_0); - __pyx_t_12 = __pyx_int_0; - if (unlikely(!__pyx_v_lb)) { __Pyx_RaiseUnboundLocalError("lb"); __PYX_ERR(0, 346, __pyx_L1_error) } - if (likely(PyList_CheckExact(__pyx_v_lb)) || PyTuple_CheckExact(__pyx_v_lb)) { - __pyx_t_1 = __pyx_v_lb; __Pyx_INCREF(__pyx_t_1); __pyx_t_19 = 0; - __pyx_t_20 = NULL; - } else { - __pyx_t_19 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_v_lb); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 346, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_20 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_20)) __PYX_ERR(0, 346, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_20)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_19 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if 
CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_19); __Pyx_INCREF(__pyx_t_2); __pyx_t_19++; if (unlikely((0 < 0))) __PYX_ERR(0, 346, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_19); __pyx_t_19++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 346, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - if (__pyx_t_19 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_19); __Pyx_INCREF(__pyx_t_2); __pyx_t_19++; if (unlikely((0 < 0))) __PYX_ERR(0, 346, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_19); __pyx_t_19++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 346, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } - } else { - __pyx_t_2 = __pyx_t_20(__pyx_t_1); - if (unlikely(!__pyx_t_2)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 346, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_2); - } - __Pyx_XDECREF_SET(__pyx_v_x, __pyx_t_2); - __pyx_t_2 = 0; - __Pyx_INCREF(__pyx_t_12); - __Pyx_XDECREF_SET(__pyx_v_j, __pyx_t_12); - __pyx_t_2 = __Pyx_PyInt_AddObjC(__pyx_t_12, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 346, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_12); - __pyx_t_12 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":347 - * - * for j, x in enumerate(lb): - * c = int(x[0]) # class # <<<<<<<<<<<<<< - * f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - * if not f.parent.is_dir(): - */ - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_x, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 347, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = __Pyx_PyNumber_Int(__pyx_t_2); 
if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 347, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF_SET(__pyx_v_c, __pyx_t_5); - __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":348 - * for j, x in enumerate(lb): - * c = int(x[0]) # class - * f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename # <<<<<<<<<<<<<< - * if not f.parent.is_dir(): - * f.parent.mkdir(parents=True) - */ - __pyx_t_5 = __Pyx_PyNumber_Divide(__pyx_v_path, __pyx_n_u_classifier); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 348, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = __Pyx_PyObject_FormatSimple(__pyx_v_c, __pyx_empty_unicode); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 348, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = __Pyx_PyNumber_Divide(__pyx_t_5, __pyx_t_2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 348, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(6); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 348, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_22 = 0; - __pyx_t_23 = 127; - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_path, __pyx_n_s_stem); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 348, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_13 = __Pyx_PyObject_FormatSimple(__pyx_t_5, __pyx_empty_unicode); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 348, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_23 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_13) > __pyx_t_23) ? 
__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_13) : __pyx_t_23; - __pyx_t_22 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_13); - __Pyx_GIVEREF(__pyx_t_13); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_13); - __pyx_t_13 = 0; - __Pyx_INCREF(__pyx_n_u__21); - __pyx_t_22 += 1; - __Pyx_GIVEREF(__pyx_n_u__21); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_n_u__21); - __pyx_t_13 = __Pyx_PyObject_GetAttrStr(__pyx_v_im_file, __pyx_n_s_stem); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 348, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __pyx_t_5 = __Pyx_PyObject_FormatSimple(__pyx_t_13, __pyx_empty_unicode); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 348, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_23 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_5) > __pyx_t_23) ? __Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_5) : __pyx_t_23; - __pyx_t_22 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_5); - __pyx_t_5 = 0; - __Pyx_INCREF(__pyx_n_u__21); - __pyx_t_22 += 1; - __Pyx_GIVEREF(__pyx_n_u__21); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_n_u__21); - __pyx_t_5 = __Pyx_PyObject_FormatSimple(__pyx_v_j, __pyx_empty_unicode); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 348, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_23 = (__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_5) > __pyx_t_23) ? 
__Pyx_PyUnicode_MAX_CHAR_VALUE(__pyx_t_5) : __pyx_t_23; - __pyx_t_22 += __Pyx_PyUnicode_GET_LENGTH(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_2, 4, __pyx_t_5); - __pyx_t_5 = 0; - __Pyx_INCREF(__pyx_kp_u_jpg); - __pyx_t_22 += 4; - __Pyx_GIVEREF(__pyx_kp_u_jpg); - PyTuple_SET_ITEM(__pyx_t_2, 5, __pyx_kp_u_jpg); - __pyx_t_5 = __Pyx_PyUnicode_Join(__pyx_t_2, 6, __pyx_t_22, __pyx_t_23); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 348, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyNumber_Divide(__pyx_t_7, __pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 348, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF_SET(__pyx_v_f, __pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":349 - * c = int(x[0]) # class - * f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - * if not f.parent.is_dir(): # <<<<<<<<<<<<<< - * f.parent.mkdir(parents=True) - * - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_f, __pyx_n_s_parent); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 349, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_is_dir); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 349, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_5, }; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if 
(unlikely(!__pyx_t_2)) __PYX_ERR(0, 349, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 349, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_10 = ((!__pyx_t_6) != 0); - if (__pyx_t_10) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":350 - * f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - * if not f.parent.is_dir(): - * f.parent.mkdir(parents=True) # <<<<<<<<<<<<<< - * - * b = x[1:] * [w, h, w, h] # box - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_f, __pyx_n_s_parent); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 350, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_mkdir); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 350, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 350, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_parents, Py_True) < 0) __PYX_ERR(0, 350, __pyx_L1_error) - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_empty_tuple, __pyx_t_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 350, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":349 - * c = int(x[0]) # class - * f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - * if not f.parent.is_dir(): # <<<<<<<<<<<<<< - * f.parent.mkdir(parents=True) - * - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":352 - * f.parent.mkdir(parents=True) - * - * b = x[1:] * [w, h, w, h] # box # <<<<<<<<<<<<<< - * # b[2:] = b[2:].max() # rectangle to square - * b[2:] = b[2:] * 1.2 + 3 # pad - */ - 
__pyx_t_5 = __Pyx_PyObject_GetSlice(__pyx_v_x, 1, 0, NULL, NULL, &__pyx_slice__16, 1, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 352, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = PyList_New(4); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 352, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_w); - __Pyx_GIVEREF(__pyx_v_w); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_v_w); - __Pyx_INCREF(__pyx_v_h); - __Pyx_GIVEREF(__pyx_v_h); - PyList_SET_ITEM(__pyx_t_2, 1, __pyx_v_h); - __Pyx_INCREF(__pyx_v_w); - __Pyx_GIVEREF(__pyx_v_w); - PyList_SET_ITEM(__pyx_t_2, 2, __pyx_v_w); - __Pyx_INCREF(__pyx_v_h); - __Pyx_GIVEREF(__pyx_v_h); - PyList_SET_ITEM(__pyx_t_2, 3, __pyx_v_h); - __pyx_t_7 = PyNumber_Multiply(__pyx_t_5, __pyx_t_2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 352, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF_SET(__pyx_v_b, __pyx_t_7); - __pyx_t_7 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":354 - * b = x[1:] * [w, h, w, h] # box - * # b[2:] = b[2:].max() # rectangle to square - * b[2:] = b[2:] * 1.2 + 3 # pad # <<<<<<<<<<<<<< - * b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int) - * - */ - __pyx_t_7 = __Pyx_PyObject_GetSlice(__pyx_v_b, 2, 0, NULL, NULL, &__pyx_slice__22, 1, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 354, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_7, __pyx_float_1_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 354, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyInt_AddObjC(__pyx_t_2, __pyx_int_3, 3, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 354, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__Pyx_PyObject_SetSlice(__pyx_v_b, __pyx_t_7, 2, 0, NULL, NULL, &__pyx_slice__22, 1, 0, 1) < 0) __PYX_ERR(0, 354, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":355 - * # b[2:] = b[2:].max() # rectangle to square - * b[2:] = b[2:] * 1.2 + 3 # pad - * b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int) # <<<<<<<<<<<<<< - * - * b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image - */ - __Pyx_GetModuleGlobalName(__pyx_t_13, __pyx_n_s_xywh2xyxy); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __pyx_t_18 = __Pyx_PyObject_GetAttrStr(__pyx_v_b, __pyx_n_s_reshape); if (unlikely(!__pyx_t_18)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_18); - __pyx_t_24 = __Pyx_PyObject_Call(__pyx_t_18, __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_24)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_24); - __Pyx_DECREF(__pyx_t_18); __pyx_t_18 = 0; - __pyx_t_18 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_13))) { - __pyx_t_18 = PyMethod_GET_SELF(__pyx_t_13); - if (likely(__pyx_t_18)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_13); - __Pyx_INCREF(__pyx_t_18); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_13, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_18, __pyx_t_24}; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_13, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_18); __pyx_t_18 = 0; - __Pyx_DECREF(__pyx_t_24); __pyx_t_24 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - } - __pyx_t_13 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_ravel); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_13))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_13); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_13); - 
__Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_13, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_5, }; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_13, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - } - __pyx_t_13 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_astype); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_int); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_13))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_13); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_13); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_13, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_5}; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_13, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - } - __Pyx_DECREF_SET(__pyx_v_b, __pyx_t_7); - __pyx_t_7 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":357 - * b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int) - * - * b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image # <<<<<<<<<<<<<< - * b[[1, 3]] = 
np.clip(b[[1, 3]], 0, h) - * assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}' - */ - __Pyx_GetModuleGlobalName(__pyx_t_13, __pyx_n_s_np); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 357, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_13, __pyx_n_s_clip); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 357, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_13 = PyList_New(2); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 357, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyList_SET_ITEM(__pyx_t_13, 0, __pyx_int_0); - __Pyx_INCREF(__pyx_int_2); - __Pyx_GIVEREF(__pyx_int_2); - PyList_SET_ITEM(__pyx_t_13, 1, __pyx_int_2); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_v_b, __pyx_t_13); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 357, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_13 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_13 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_13)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_13); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[4] = {__pyx_t_13, __pyx_t_2, __pyx_int_0, __pyx_v_w}; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_4, 3+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 357, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_5 = PyList_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 357, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyList_SET_ITEM(__pyx_t_5, 0, __pyx_int_0); - __Pyx_INCREF(__pyx_int_2); - 
__Pyx_GIVEREF(__pyx_int_2); - PyList_SET_ITEM(__pyx_t_5, 1, __pyx_int_2); - if (unlikely((PyObject_SetItem(__pyx_v_b, __pyx_t_5, __pyx_t_7) < 0))) __PYX_ERR(0, 357, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":358 - * - * b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image - * b[[1, 3]] = np.clip(b[[1, 3]], 0, h) # <<<<<<<<<<<<<< - * assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}' - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_clip); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyList_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - PyList_SET_ITEM(__pyx_t_5, 0, __pyx_int_1); - __Pyx_INCREF(__pyx_int_3); - __Pyx_GIVEREF(__pyx_int_3); - PyList_SET_ITEM(__pyx_t_5, 1, __pyx_int_3); - __pyx_t_13 = __Pyx_PyObject_GetItem(__pyx_v_b, __pyx_t_5); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[4] = {__pyx_t_5, __pyx_t_13, __pyx_int_0, __pyx_v_h}; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 3+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - 
__Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = PyList_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_int_1); - __Pyx_INCREF(__pyx_int_3); - __Pyx_GIVEREF(__pyx_int_3); - PyList_SET_ITEM(__pyx_t_2, 1, __pyx_int_3); - if (unlikely((PyObject_SetItem(__pyx_v_b, __pyx_t_2, __pyx_t_7) < 0))) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":359 - * b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image - * b[[1, 3]] = np.clip(b[[1, 3]], 0, h) - * assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}' # <<<<<<<<<<<<<< - * - * - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(!Py_OptimizeFlag)) { - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_cv2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_13 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_imwrite); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Str(__pyx_v_f); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_b, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_24 = __Pyx_GetItemInt(__pyx_v_b, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_24)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_24); - __pyx_t_18 = PySlice_New(__pyx_t_5, __pyx_t_24, Py_None); if (unlikely(!__pyx_t_18)) __PYX_ERR(0, 359, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_18); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_24); __pyx_t_24 = 0; - __pyx_t_24 = __Pyx_GetItemInt(__pyx_v_b, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_24)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_24); - __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_b, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_25 = PySlice_New(__pyx_t_24, __pyx_t_5, Py_None); if (unlikely(!__pyx_t_25)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_25); - __Pyx_DECREF(__pyx_t_24); __pyx_t_24 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_18); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_18); - __Pyx_GIVEREF(__pyx_t_25); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_25); - __pyx_t_18 = 0; - __pyx_t_25 = 0; - __pyx_t_25 = __Pyx_PyObject_GetItem(__pyx_v_im, __pyx_t_5); if (unlikely(!__pyx_t_25)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_25); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_13))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_13); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_13); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_13, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[3] = {__pyx_t_5, __pyx_t_2, __pyx_t_25}; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_13, __pyx_callargs+1-__pyx_t_4, 2+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_25); __pyx_t_25 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - } - __pyx_t_10 = 
__Pyx_PyObject_IsTrue(__pyx_t_7); if (unlikely((__pyx_t_10 < 0))) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_10)) { - __pyx_t_7 = __Pyx_PyObject_FormatSimple(__pyx_v_f, __pyx_empty_unicode); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_13 = __Pyx_PyUnicode_Concat(__pyx_kp_u_box_failure_in, __pyx_t_7); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_Raise(__pyx_builtin_AssertionError, __pyx_t_13, 0, 0); - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __PYX_ERR(0, 359, __pyx_L1_error) - } - } - #else - if ((1)); else __PYX_ERR(0, 359, __pyx_L1_error) - #endif - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":346 - * lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - * - * for j, x in enumerate(lb): # <<<<<<<<<<<<<< - * c = int(x[0]) # class - * f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - */ - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":342 - * # labels - * lb_file = Path(img2label_paths([str(im_file)])[0]) - * if Path(lb_file).exists(): # <<<<<<<<<<<<<< - * with open(lb_file) as f: - * lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":335 - * n = len(files) # number of files - * for im_file in tqdm(files, total=n): - * if im_file.suffix[1:] in IMG_FORMATS: # <<<<<<<<<<<<<< - * # image - * im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":334 - * files = list(path.rglob('*.*')) - * n = len(files) # number of files - * for im_file in tqdm(files, total=n): # <<<<<<<<<<<<<< - * if im_file.suffix[1:] in IMG_FORMATS: - 
* # image - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":328 - * - * - * def extract_boxes(path='../datasets/coco128'): # from utils.datasets import *; extract_boxes() # <<<<<<<<<<<<<< - * # Convert detection dataset into classification dataset, with one directory per class - * path = Path(path) # images dir - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_12); - __Pyx_XDECREF(__pyx_t_13); - __Pyx_XDECREF(__pyx_t_18); - __Pyx_XDECREF(__pyx_t_24); - __Pyx_XDECREF(__pyx_t_25); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.extract_boxes", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_files); - __Pyx_XDECREF(__pyx_v_im_file); - __Pyx_XDECREF(__pyx_v_im); - __Pyx_XDECREF(__pyx_v_h); - __Pyx_XDECREF(__pyx_v_w); - __Pyx_XDECREF(__pyx_v_lb_file); - __Pyx_XDECREF(__pyx_v_f); - __Pyx_XDECREF(__pyx_v_lb); - __Pyx_XDECREF(__pyx_v_j); - __Pyx_XDECREF(__pyx_v_x); - __Pyx_XDECREF(__pyx_v_c); - __Pyx_XDECREF(__pyx_v_b); - __Pyx_XDECREF(__pyx_9genexpr10__pyx_v_x); - __Pyx_XDECREF(__pyx_v_path); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":362 - * - * - * def autosplit(path='../datasets/coco128/images', weights=(0.9, 0.1, 0.0), annotated_only=False): # <<<<<<<<<<<<<< - * """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - * Usage: from utils.datasets import *; autosplit() - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_21autosplit(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject 
*__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_20autosplit, " Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files\n Usage: from utils.datasets import *; autosplit()\n Arguments\n path: Path to images directory\n weights: Train, val, test weights (list, tuple)\n annotated_only: Only use images with an annotated txt file\n "); -static PyMethodDef __pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_21autosplit = {"autosplit", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_21autosplit, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_20autosplit}; -static PyObject *__pyx_pw_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_21autosplit(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_path = 0; - PyObject *__pyx_v_weights = 0; - PyObject *__pyx_v_annotated_only = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED const Py_ssize_t __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("autosplit (wrapper)", 0); - { - #if CYTHON_USE_MODULE_STATE - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_path,&__pyx_n_s_weights,&__pyx_n_s_annotated_only,0}; - #else - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_path,&__pyx_n_s_weights,&__pyx_n_s_annotated_only,0}; - #endif - PyObject* values[3] = {0,0,0}; - values[0] = ((PyObject *)((PyObject*)__pyx_kp_u_datasets_coco128_images)); - values[1] = ((PyObject 
*)((PyObject*)__pyx_tuple__24)); - values[2] = ((PyObject *)((PyObject *)Py_False)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_path); - if (value) { values[0] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 362, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 1: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_weights); - if (value) { values[1] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 362, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_annotated_only); - if (value) { values[2] = value; kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 362, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "autosplit") < 0)) __PYX_ERR(0, 362, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_path = values[0]; - __pyx_v_weights = values[1]; - 
__pyx_v_annotated_only = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("autosplit", 0, 0, 3, __pyx_nargs); __PYX_ERR(0, 362, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.autosplit", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_20autosplit(__pyx_self, __pyx_v_path, __pyx_v_weights, __pyx_v_annotated_only); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9autosplit_2generator4(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":371 - * """ - * path = Path(path) # images dir - * files = sorted(x for x in path.rglob('*.*') if x.suffix[1:].lower() in IMG_FORMATS) # image files only # <<<<<<<<<<<<<< - * n = len(files) # number of files - * random.seed(0) # for reproducibility - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9autosplit_genexpr(PyObject *__pyx_self) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope 
= ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 371, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_outer_scope = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *) __pyx_self; - __Pyx_INCREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - __Pyx_GIVEREF((PyObject *)__pyx_cur_scope->__pyx_outer_scope); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9autosplit_2generator4, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_autosplit_locals_genexpr, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils); if (unlikely(!gen)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.autosplit.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9autosplit_2generator4(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr *__pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - PyObject *(*__pyx_t_6)(PyObject *); - PyObject *__pyx_t_7 = NULL; - int 
__pyx_t_8; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 371, __pyx_L1_error) - __pyx_r = PyList_New(0); if (unlikely(!__pyx_r)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_r); - if (unlikely(!__pyx_cur_scope->__pyx_outer_scope->__pyx_v_path)) { __Pyx_RaiseClosureNameError("path"); __PYX_ERR(0, 371, __pyx_L1_error) } - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_outer_scope->__pyx_v_path, __pyx_n_s_rglob); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_kp_u__4}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); __pyx_t_5 = 0; - __pyx_t_6 = NULL; - } else { - __pyx_t_5 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_6)) 
__PYX_ERR(0, 371, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_6)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_5); __Pyx_INCREF(__pyx_t_1); __pyx_t_5++; if (unlikely((0 < 0))) __PYX_ERR(0, 371, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_5); __Pyx_INCREF(__pyx_t_1); __pyx_t_5++; if (unlikely((0 < 0))) __PYX_ERR(0, 371, __pyx_L1_error) - #else - __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_6(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 371, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_x); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_x, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_x, __pyx_n_s_suffix); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_7 = __Pyx_PyObject_GetSlice(__pyx_t_3, 1, 0, NULL, NULL, &__pyx_slice__16, 1, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_lower); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_7, }; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_IMG_FORMATS); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = (__Pyx_PySequence_ContainsTF(__pyx_t_1, __pyx_t_3, Py_EQ)); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_9 = (__pyx_t_8 != 0); - if (__pyx_t_9) { - if (unlikely(__Pyx_ListComp_Append(__pyx_r, (PyObject*)__pyx_cur_scope->__pyx_v_x))) __PYX_ERR(0, 371, __pyx_L1_error) - } - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - 
__Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":362 - * - * - * def autosplit(path='../datasets/coco128/images', weights=(0.9, 0.1, 0.0), annotated_only=False): # <<<<<<<<<<<<<< - * """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - * Usage: from utils.datasets import *; autosplit() - */ - -static PyObject *__pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_20autosplit(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_path, PyObject *__pyx_v_weights, PyObject *__pyx_v_annotated_only) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *__pyx_cur_scope; - PyObject *__pyx_v_files = NULL; - Py_ssize_t __pyx_v_n; - PyObject *__pyx_v_indices = NULL; - PyObject *__pyx_v_txt = NULL; - PyObject *__pyx_v_i = NULL; - PyObject *__pyx_v_img = NULL; - PyObject *__pyx_v_f = NULL; - PyObject *__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9autosplit_2generator4 = 0; - PyObject *__pyx_9genexpr12__pyx_v_x = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - Py_ssize_t __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *(*__pyx_t_9)(PyObject *); - PyObject *(*__pyx_t_10)(PyObject *); - int __pyx_t_11; - int __pyx_t_12; - int __pyx_t_13; - PyObject *__pyx_t_14 = NULL; - PyObject *__pyx_t_15 = NULL; - PyObject *__pyx_t_16 = NULL; - PyObject *__pyx_t_17 = NULL; - PyObject *__pyx_t_18 = NULL; - PyObject *__pyx_t_19 = NULL; - PyObject *__pyx_t_20 = NULL; - PyObject *__pyx_t_21 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("autosplit", 0); - __pyx_cur_scope = (struct 
__pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 362, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_v_path = __pyx_v_path; - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_path); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_path); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":370 - * annotated_only: Only use images with an annotated txt file - * """ - * path = Path(path) # images dir # <<<<<<<<<<<<<< - * files = sorted(x for x in path.rglob('*.*') if x.suffix[1:].lower() in IMG_FORMATS) # image files only - * n = len(files) # number of files - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Path); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 370, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_cur_scope->__pyx_v_path}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 370, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_GOTREF(__pyx_cur_scope->__pyx_v_path); - 
__Pyx_DECREF_SET(__pyx_cur_scope->__pyx_v_path, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":371 - * """ - * path = Path(path) # images dir - * files = sorted(x for x in path.rglob('*.*') if x.suffix[1:].lower() in IMG_FORMATS) # image files only # <<<<<<<<<<<<<< - * n = len(files) # number of files - * random.seed(0) # for reproducibility - */ - __pyx_t_2 = __pyx_pf_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9autosplit_genexpr(((PyObject*)__pyx_cur_scope)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_Generator_Next(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 371, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_1 = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_5 = PyList_Sort(__pyx_t_1); if (unlikely(__pyx_t_5 == ((int)-1))) __PYX_ERR(0, 371, __pyx_L1_error) - __pyx_v_files = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":372 - * path = Path(path) # images dir - * files = sorted(x for x in path.rglob('*.*') if x.suffix[1:].lower() in IMG_FORMATS) # image files only - * n = len(files) # number of files # <<<<<<<<<<<<<< - * random.seed(0) # for reproducibility - * indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split - */ - __pyx_t_6 = PyList_GET_SIZE(__pyx_v_files); if (unlikely(__pyx_t_6 == ((Py_ssize_t)-1))) __PYX_ERR(0, 372, __pyx_L1_error) - __pyx_v_n = __pyx_t_6; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":373 - * files = sorted(x for x in path.rglob('*.*') if x.suffix[1:].lower() in IMG_FORMATS) # image files only - * n = len(files) # number of files - * random.seed(0) # for reproducibility # <<<<<<<<<<<<<< - * indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_random); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(0, 373, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_seed); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 373, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_int_0}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 373, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":374 - * n = len(files) # number of files - * random.seed(0) # for reproducibility - * indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split # <<<<<<<<<<<<<< - * - * txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_random); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 374, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_choices); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 374, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyList_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 374, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_int_0); - __Pyx_INCREF(__pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - 
PyList_SET_ITEM(__pyx_t_1, 1, __pyx_int_1); - __Pyx_INCREF(__pyx_int_2); - __Pyx_GIVEREF(__pyx_int_2); - PyList_SET_ITEM(__pyx_t_1, 2, __pyx_int_2); - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 374, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 374, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_weights, __pyx_v_weights) < 0) __PYX_ERR(0, 374, __pyx_L1_error) - __pyx_t_7 = PyInt_FromSsize_t(__pyx_v_n); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 374, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_k, __pyx_t_7) < 0) __PYX_ERR(0, 374, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 374, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_indices = __pyx_t_7; - __pyx_t_7 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":376 - * indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split - * - * txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files # <<<<<<<<<<<<<< - * [(path.parent / x).unlink(missing_ok=True) for x in txt] # remove existing - * - */ - __pyx_t_7 = PyList_New(3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 376, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_INCREF(__pyx_kp_u_autosplit_train_txt); - __Pyx_GIVEREF(__pyx_kp_u_autosplit_train_txt); - PyList_SET_ITEM(__pyx_t_7, 0, __pyx_kp_u_autosplit_train_txt); - __Pyx_INCREF(__pyx_kp_u_autosplit_val_txt); - __Pyx_GIVEREF(__pyx_kp_u_autosplit_val_txt); - PyList_SET_ITEM(__pyx_t_7, 1, __pyx_kp_u_autosplit_val_txt); - 
__Pyx_INCREF(__pyx_kp_u_autosplit_test_txt); - __Pyx_GIVEREF(__pyx_kp_u_autosplit_test_txt); - PyList_SET_ITEM(__pyx_t_7, 2, __pyx_kp_u_autosplit_test_txt); - __pyx_v_txt = ((PyObject*)__pyx_t_7); - __pyx_t_7 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":377 - * - * txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files - * [(path.parent / x).unlink(missing_ok=True) for x in txt] # remove existing # <<<<<<<<<<<<<< - * - * print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only) - */ - { /* enter inner scope */ - __pyx_t_7 = PyList_New(0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 377, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_1 = __pyx_v_txt; __Pyx_INCREF(__pyx_t_1); __pyx_t_6 = 0; - for (;;) { - if (__pyx_t_6 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_3); __pyx_t_6++; if (unlikely((0 < 0))) __PYX_ERR(0, 377, __pyx_L5_error) - #else - __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 377, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_XDECREF_SET(__pyx_9genexpr12__pyx_v_x, __pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_path, __pyx_n_s_parent); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 377, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyNumber_Divide(__pyx_t_3, __pyx_9genexpr12__pyx_v_x); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 377, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_unlink); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 377, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 377, __pyx_L5_error) - 
__Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_missing_ok, Py_True) < 0) __PYX_ERR(0, 377, __pyx_L5_error) - __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_empty_tuple, __pyx_t_2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 377, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_7, (PyObject*)__pyx_t_8))) __PYX_ERR(0, 377, __pyx_L5_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_9genexpr12__pyx_v_x); __pyx_9genexpr12__pyx_v_x = 0; - goto __pyx_L8_exit_scope; - __pyx_L5_error:; - __Pyx_XDECREF(__pyx_9genexpr12__pyx_v_x); __pyx_9genexpr12__pyx_v_x = 0; - goto __pyx_L1_error; - __pyx_L8_exit_scope:; - } /* exit inner scope */ - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":379 - * [(path.parent / x).unlink(missing_ok=True) for x in txt] # remove existing - * - * print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only) # <<<<<<<<<<<<<< - * for i, img in tqdm(zip(indices, files), total=n): - * if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label - */ - __pyx_t_7 = __Pyx_PyObject_FormatSimple(__pyx_cur_scope->__pyx_v_path, __pyx_empty_unicode); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 379, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_1 = __Pyx_PyUnicode_Concat(__pyx_kp_u_Autosplitting_images_from, __pyx_t_7); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 379, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = PyNumber_Multiply(__pyx_kp_u_using_txt_labeled_images_only, __pyx_v_annotated_only); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 379, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = PyNumber_Add(__pyx_t_1, __pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 379, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_CallOneArg(__pyx_builtin_print, __pyx_t_8); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 379, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":380 - * - * print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only) - * for i, img in tqdm(zip(indices, files), total=n): # <<<<<<<<<<<<<< - * if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label - * with open(path.parent / txt[i], 'a') as f: - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_tqdm); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = PyTuple_New(2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_v_indices); - __Pyx_GIVEREF(__pyx_v_indices); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_indices); - __Pyx_INCREF(__pyx_v_files); - __Pyx_GIVEREF(__pyx_v_files); - PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_v_files); - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_zip, __pyx_t_8, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyTuple_New(1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_total, __pyx_t_2) < 0) __PYX_ERR(0, 380, __pyx_L1_error) - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_8, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_t_2)) || PyTuple_CheckExact(__pyx_t_2)) { - __pyx_t_1 = __pyx_t_2; __Pyx_INCREF(__pyx_t_1); __pyx_t_6 = 0; - __pyx_t_9 = NULL; - } else { - __pyx_t_6 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_9 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 380, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - for (;;) { - if (likely(!__pyx_t_9)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - if (__pyx_t_6 >= PyList_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_2); __pyx_t_6++; if (unlikely((0 < 0))) __PYX_ERR(0, 380, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - if (__pyx_t_6 >= PyTuple_GET_SIZE(__pyx_t_1)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_2); __pyx_t_6++; if (unlikely((0 < 0))) __PYX_ERR(0, 380, __pyx_L1_error) - #else - __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } - } else { - __pyx_t_2 = __pyx_t_9(__pyx_t_1); - if (unlikely(!__pyx_t_2)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) 
PyErr_Clear(); - else __PYX_ERR(0, 380, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_2); - } - if ((likely(PyTuple_CheckExact(__pyx_t_2))) || (PyList_CheckExact(__pyx_t_2))) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 380, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_8 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_7 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_8 = PyList_GET_ITEM(sequence, 0); - __pyx_t_7 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(__pyx_t_7); - #else - __pyx_t_8 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_7 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 380, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_10 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_8 = __pyx_t_10(__pyx_t_3); if (unlikely(!__pyx_t_8)) goto __pyx_L11_unpacking_failed; - __Pyx_GOTREF(__pyx_t_8); - index = 1; __pyx_t_7 = __pyx_t_10(__pyx_t_3); if (unlikely(!__pyx_t_7)) goto __pyx_L11_unpacking_failed; - __Pyx_GOTREF(__pyx_t_7); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_10(__pyx_t_3), 2) < 0) __PYX_ERR(0, 380, __pyx_L1_error) - __pyx_t_10 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L12_unpacking_done; - __pyx_L11_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_10 = NULL; - if (__Pyx_IterFinish() == 0) 
__Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 380, __pyx_L1_error) - __pyx_L12_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_8); - __pyx_t_8 = 0; - __Pyx_XDECREF_SET(__pyx_v_img, __pyx_t_7); - __pyx_t_7 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":381 - * print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only) - * for i, img in tqdm(zip(indices, files), total=n): - * if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label # <<<<<<<<<<<<<< - * with open(path.parent / txt[i], 'a') as f: - * f.write('./' + img.relative_to(path.parent).as_posix() + '\n') # add image to txt file - */ - __pyx_t_12 = __Pyx_PyObject_IsTrue(__pyx_v_annotated_only); if (unlikely((__pyx_t_12 < 0))) __PYX_ERR(0, 381, __pyx_L1_error) - __pyx_t_13 = ((!__pyx_t_12) != 0); - if (!__pyx_t_13) { - } else { - __pyx_t_11 = __pyx_t_13; - goto __pyx_L14_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_Path); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 381, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GetModuleGlobalName(__pyx_t_14, __pyx_n_s_img2label_paths); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 381, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - __pyx_t_15 = __Pyx_PyObject_Str(__pyx_v_img); if (unlikely(!__pyx_t_15)) __PYX_ERR(0, 381, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_15); - __pyx_t_16 = PyList_New(1); if (unlikely(!__pyx_t_16)) __PYX_ERR(0, 381, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_16); - __Pyx_GIVEREF(__pyx_t_15); - PyList_SET_ITEM(__pyx_t_16, 0, __pyx_t_15); - __pyx_t_15 = 0; - __pyx_t_15 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_14))) { - __pyx_t_15 = PyMethod_GET_SELF(__pyx_t_14); - if (likely(__pyx_t_15)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_14); - __Pyx_INCREF(__pyx_t_15); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_14, function); - __pyx_t_4 = 1; - } - } - { - PyObject 
*__pyx_callargs[2] = {__pyx_t_15, __pyx_t_16}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_14, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_15); __pyx_t_15 = 0; - __Pyx_DECREF(__pyx_t_16); __pyx_t_16 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 381, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - } - __pyx_t_14 = __Pyx_GetItemInt(__pyx_t_3, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 381, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_8))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_8); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_8, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_t_14}; - __pyx_t_7 = __Pyx_PyObject_FastCall(__pyx_t_8, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 381, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_exists); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 381, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_8); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_8, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_7, }; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_8, __pyx_callargs+1-__pyx_t_4, 
0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 381, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __pyx_t_13 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely((__pyx_t_13 < 0))) __PYX_ERR(0, 381, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_11 = __pyx_t_13; - __pyx_L14_bool_binop_done:; - if (__pyx_t_11) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":382 - * for i, img in tqdm(zip(indices, files), total=n): - * if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label - * with open(path.parent / txt[i], 'a') as f: # <<<<<<<<<<<<<< - * f.write('./' + img.relative_to(path.parent).as_posix() + '\n') # add image to txt file - * - */ - /*with:*/ { - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_path, __pyx_n_s_parent); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 382, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_8 = __Pyx_PyObject_GetItem(__pyx_v_txt, __pyx_v_i); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 382, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_7 = __Pyx_PyNumber_Divide(__pyx_t_2, __pyx_t_8); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 382, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyTuple_New(2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 382, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_7); - __Pyx_INCREF(__pyx_n_u_a); - __Pyx_GIVEREF(__pyx_n_u_a); - PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_n_u_a); - __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_Call(__pyx_builtin_open, __pyx_t_8, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 382, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_17 = __Pyx_PyObject_LookupSpecial(__pyx_t_7, __pyx_n_s_exit); if (unlikely(!__pyx_t_17)) __PYX_ERR(0, 382, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_17); - __pyx_t_2 = __Pyx_PyObject_LookupSpecial(__pyx_t_7, __pyx_n_s_enter); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 382, __pyx_L16_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_14 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_14 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_14)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_14); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_14, }; - __pyx_t_8 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_14); __pyx_t_14 = 0; - if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 382, __pyx_L16_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = __pyx_t_8; - __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - /*try:*/ { - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_18, &__pyx_t_19, &__pyx_t_20); - __Pyx_XGOTREF(__pyx_t_18); - __Pyx_XGOTREF(__pyx_t_19); - __Pyx_XGOTREF(__pyx_t_20); - /*try:*/ { - __Pyx_XDECREF_SET(__pyx_v_f, __pyx_t_2); - __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":383 - * if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label - * with open(path.parent / txt[i], 'a') as f: - * f.write('./' + img.relative_to(path.parent).as_posix() + '\n') # add image to txt file # <<<<<<<<<<<<<< - * - */ - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_f, __pyx_n_s_write); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 383, __pyx_L22_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_img, __pyx_n_s_relative_to); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 383, __pyx_L22_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_16 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_path, __pyx_n_s_parent); if 
(unlikely(!__pyx_t_16)) __PYX_ERR(0, 383, __pyx_L22_error) - __Pyx_GOTREF(__pyx_t_16); - __pyx_t_15 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_15 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_15)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_15); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_15, __pyx_t_16}; - __pyx_t_14 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_15); __pyx_t_15 = 0; - __Pyx_DECREF(__pyx_t_16); __pyx_t_16 = 0; - if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 383, __pyx_L22_error) - __Pyx_GOTREF(__pyx_t_14); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_14, __pyx_n_s_as_posix); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 383, __pyx_L22_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - __pyx_t_14 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_14 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_14)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_14); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[1] = {__pyx_t_14, }; - __pyx_t_8 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 0+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_14); __pyx_t_14 = 0; - if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 383, __pyx_L22_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = PyNumber_Add(__pyx_kp_u__25, __pyx_t_8); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 383, __pyx_L22_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyNumber_Add(__pyx_t_3, __pyx_kp_u__26); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 383, 
__pyx_L22_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_4 = 1; - } - } - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_t_8}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 383, __pyx_L22_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":382 - * for i, img in tqdm(zip(indices, files), total=n): - * if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label - * with open(path.parent / txt[i], 'a') as f: # <<<<<<<<<<<<<< - * f.write('./' + img.relative_to(path.parent).as_posix() + '\n') # add image to txt file - * - */ - } - __Pyx_XDECREF(__pyx_t_18); __pyx_t_18 = 0; - __Pyx_XDECREF(__pyx_t_19); __pyx_t_19 = 0; - __Pyx_XDECREF(__pyx_t_20); __pyx_t_20 = 0; - goto __pyx_L29_try_end; - __pyx_L22_error:; - __Pyx_XDECREF(__pyx_t_14); __pyx_t_14 = 0; - __Pyx_XDECREF(__pyx_t_15); __pyx_t_15 = 0; - __Pyx_XDECREF(__pyx_t_16); __pyx_t_16 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - /*except:*/ { - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.autosplit", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_2, &__pyx_t_7, &__pyx_t_8) < 0) __PYX_ERR(0, 382, __pyx_L24_except_error) - __Pyx_GOTREF(__pyx_t_2); - 
__Pyx_GOTREF(__pyx_t_7); - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_3 = PyTuple_Pack(3, __pyx_t_2, __pyx_t_7, __pyx_t_8); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 382, __pyx_L24_except_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_21 = __Pyx_PyObject_Call(__pyx_t_17, __pyx_t_3, NULL); - __Pyx_DECREF(__pyx_t_17); __pyx_t_17 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_21)) __PYX_ERR(0, 382, __pyx_L24_except_error) - __Pyx_GOTREF(__pyx_t_21); - __pyx_t_11 = __Pyx_PyObject_IsTrue(__pyx_t_21); - __Pyx_DECREF(__pyx_t_21); __pyx_t_21 = 0; - if (__pyx_t_11 < 0) __PYX_ERR(0, 382, __pyx_L24_except_error) - __pyx_t_13 = ((!(__pyx_t_11 != 0)) != 0); - if (unlikely(__pyx_t_13)) { - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_7); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_ErrRestoreWithState(__pyx_t_2, __pyx_t_7, __pyx_t_8); - __pyx_t_2 = 0; __pyx_t_7 = 0; __pyx_t_8 = 0; - __PYX_ERR(0, 382, __pyx_L24_except_error) - } - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L23_exception_handled; - } - __pyx_L24_except_error:; - __Pyx_XGIVEREF(__pyx_t_18); - __Pyx_XGIVEREF(__pyx_t_19); - __Pyx_XGIVEREF(__pyx_t_20); - __Pyx_ExceptionReset(__pyx_t_18, __pyx_t_19, __pyx_t_20); - goto __pyx_L1_error; - __pyx_L23_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_18); - __Pyx_XGIVEREF(__pyx_t_19); - __Pyx_XGIVEREF(__pyx_t_20); - __Pyx_ExceptionReset(__pyx_t_18, __pyx_t_19, __pyx_t_20); - __pyx_L29_try_end:; - } - } - /*finally:*/ { - /*normal exit:*/{ - if (__pyx_t_17) { - __pyx_t_20 = __Pyx_PyObject_Call(__pyx_t_17, __pyx_tuple__20, NULL); - __Pyx_DECREF(__pyx_t_17); __pyx_t_17 = 0; - if (unlikely(!__pyx_t_20)) __PYX_ERR(0, 382, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_20); - __Pyx_DECREF(__pyx_t_20); __pyx_t_20 = 0; - } - goto __pyx_L21; - } - __pyx_L21:; - } - goto __pyx_L33; - __pyx_L16_error:; - __Pyx_DECREF(__pyx_t_17); __pyx_t_17 = 0; - goto __pyx_L1_error; - __pyx_L33:; - } - - /* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":381 - * print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only) - * for i, img in tqdm(zip(indices, files), total=n): - * if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label # <<<<<<<<<<<<<< - * with open(path.parent / txt[i], 'a') as f: - * f.write('./' + img.relative_to(path.parent).as_posix() + '\n') # add image to txt file - */ - } - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":380 - * - * print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only) - * for i, img in tqdm(zip(indices, files), total=n): # <<<<<<<<<<<<<< - * if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label - * with open(path.parent / txt[i], 'a') as f: - */ - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":362 - * - * - * def autosplit(path='../datasets/coco128/images', weights=(0.9, 0.1, 0.0), annotated_only=False): # <<<<<<<<<<<<<< - * """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - * Usage: from utils.datasets import *; autosplit() - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_14); - __Pyx_XDECREF(__pyx_t_15); - __Pyx_XDECREF(__pyx_t_16); - __Pyx_AddTraceback("pdf_toolbox.lib.dia_yolov5.utils.datasets.autosplit", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_files); - __Pyx_XDECREF(__pyx_v_indices); - __Pyx_XDECREF(__pyx_v_txt); - __Pyx_XDECREF(__pyx_v_i); - __Pyx_XDECREF(__pyx_v_img); - __Pyx_XDECREF(__pyx_v_f); - __Pyx_XDECREF(__pyx_gb_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9autosplit_2generator4); - 
__Pyx_XDECREF(__pyx_9genexpr12__pyx_v_x); - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_v_paths); - if (CYTHON_COMPILING_IN_CPYTHON && 
((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *)o; - if (p->__pyx_v_paths) { - e = (*v)(p->__pyx_v_paths, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash(PyObject *o) { - PyObject* tmp; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash *)o; - tmp = ((PyObject*)p->__pyx_v_paths); - p->__pyx_v_paths = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash}, - {Py_tp_clear, (void 
*)__pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash_spec = { - "pdf_toolbox.lib.dia_yolov5.utils.datasets.__pyx_scope_struct__get_hash", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.utils.datasets.""__pyx_scope_struct__get_hash", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash, /*tp_traverse*/ 
- __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr > 0) & (t->tp_basicsize == 
sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v_p); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr *p = (struct 
__pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v_p) { - e = (*v)(p->__pyx_v_p, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.utils.datasets.__pyx_scope_struct_1_genexpr", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.utils.datasets.""__pyx_scope_struct_1_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 
- 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic[8]; -static int 
__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_v_s); - Py_CLEAR(p->__pyx_v_self); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic)))) { - 
__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *)o; - if (p->__pyx_v_s) { - e = (*v)(p->__pyx_v_s, a); if (e) return e; - } - if (p->__pyx_v_self) { - e = (*v)(p->__pyx_v_self, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic(PyObject *o) { - PyObject* tmp; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic *)o; - tmp = ((PyObject*)p->__pyx_v_s); - p->__pyx_v_s = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->__pyx_v_self); - p->__pyx_v_self = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic}, - {Py_tp_clear, (void *)__pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic}, - 
{Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic_spec = { - "pdf_toolbox.lib.dia_yolov5.utils.datasets.__pyx_scope_struct_2_load_mosaic", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.utils.datasets.""__pyx_scope_struct_2_load_mosaic", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic, /*tp_traverse*/ - 
__pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr > 0) & (t->tp_basicsize == 
sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v_x); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr *p = (struct 
__pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v_x) { - e = (*v)(p->__pyx_v_x, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.utils.datasets.__pyx_scope_struct_3_genexpr", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.utils.datasets.""__pyx_scope_struct_3_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 
- 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9[8]; -static int 
__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_v_c); - Py_CLEAR(p->__pyx_v_s); - Py_CLEAR(p->__pyx_v_self); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9)))) { - 
__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *)o; - if (p->__pyx_v_c) { - e = (*v)(p->__pyx_v_c, a); if (e) return e; - } - if (p->__pyx_v_s) { - e = (*v)(p->__pyx_v_s, a); if (e) return e; - } - if (p->__pyx_v_self) { - e = (*v)(p->__pyx_v_self, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9(PyObject *o) { - PyObject* tmp; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 *)o; - tmp = ((PyObject*)p->__pyx_v_c); - p->__pyx_v_c = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->__pyx_v_s); - p->__pyx_v_s = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->__pyx_v_self); - p->__pyx_v_self = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9}, - {Py_tp_traverse, (void 
*)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9}, - {Py_tp_clear, (void *)__pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9_spec = { - "pdf_toolbox.lib.dia_yolov5.utils.datasets.__pyx_scope_struct_4_load_mosaic9", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.utils.datasets.""__pyx_scope_struct_4_load_mosaic9", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - 
Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9, /*tp_traverse*/ - __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API 
- allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v_x); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int 
__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v_x) { - e = (*v)(p->__pyx_v_x, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.utils.datasets.__pyx_scope_struct_5_genexpr", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.utils.datasets.""__pyx_scope_struct_5_genexpr", /*tp_name*/ - sizeof(struct 
__pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY 
&& PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v__); - Py_CLEAR(p->__pyx_t_0); - if (CYTHON_COMPILING_IN_CPYTHON && 
((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v__) { - e = (*v)(p->__pyx_v__, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.utils.datasets.__pyx_scope_struct_6_genexpr", - sizeof(struct 
__pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.utils.datasets.""__pyx_scope_struct_6_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr, 
/*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit = 0; - -static PyObject *__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = 
(*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_v_path); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit++] = ((struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *)o; - if (p->__pyx_v_path) { - e = (*v)(p->__pyx_v_path, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit(PyObject *o) { - PyObject* tmp; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit *)o; - tmp = ((PyObject*)p->__pyx_v_path); - p->__pyx_v_path = Py_None; Py_INCREF(Py_None); - 
Py_XDECREF(tmp); - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit}, - {Py_tp_clear, (void *)__pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit_spec = { - "pdf_toolbox.lib.dia_yolov5.utils.datasets.__pyx_scope_struct_7_autosplit", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.utils.datasets.""__pyx_scope_struct_7_autosplit", /*tp_name*/ - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, 
/*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit, /*tp_traverse*/ - __pyx_tp_clear_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr *__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr[8]; -static int __pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr = 0; - -static PyObject 
*__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - if (CYTHON_COMPILING_IN_CPYTHON && likely((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr > 0) & (t->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr)))) { - o = (PyObject*)__pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr[--__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr(PyObject *o) { - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr *)o; - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_outer_scope); - Py_CLEAR(p->__pyx_v_x); - if (CYTHON_COMPILING_IN_CPYTHON && ((__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr < 8) & (Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr)))) { - __pyx_freelist_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr[__pyx_freecount_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr++] = ((struct 
__pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr *)o); - } else { - (*Py_TYPE(o)->tp_free)(o); - } -} - -static int __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr *p = (struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr *)o; - if (p->__pyx_outer_scope) { - e = (*v)(((PyObject *)p->__pyx_outer_scope), a); if (e) return e; - } - if (p->__pyx_v_x) { - e = (*v)(p->__pyx_v_x, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr_spec = { - "pdf_toolbox.lib.dia_yolov5.utils.datasets.__pyx_scope_struct_8_genexpr", - sizeof(struct __pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, - __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "pdf_toolbox.lib.dia_yolov5.utils.datasets.""__pyx_scope_struct_8_genexpr", /*tp_name*/ - sizeof(struct 
__pyx_obj_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY 
&& PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static PyMethodDef __pyx_methods[] = { - {0, 0, 0, 0} -}; -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif -/* #### Code section: pystring_table ### */ - -static __Pyx_StringTabEntry __pyx_string_tab[] = { - #if CYTHON_USE_MODULE_STATE - {0, __pyx_k_, sizeof(__pyx_k_), 0, 1, 0, 0}, - {0, __pyx_k_AssertionError, sizeof(__pyx_k_AssertionError), 0, 0, 1, 1}, - {0, __pyx_k_Autosplitting_images_from, sizeof(__pyx_k_Autosplitting_images_from), 0, 1, 0, 0}, - {0, __pyx_k_CAP_PROP_FRAME_COUNT, sizeof(__pyx_k_CAP_PROP_FRAME_COUNT), 0, 0, 1, 1}, - {0, __pyx_k_ERROR, sizeof(__pyx_k_ERROR), 0, 1, 0, 0}, - {0, __pyx_k_ExifTags, sizeof(__pyx_k_ExifTags), 0, 0, 1, 1}, - {0, __pyx_k_F, sizeof(__pyx_k_F), 0, 0, 1, 1}, - {0, __pyx_k_FLIP_LEFT_RIGHT, sizeof(__pyx_k_FLIP_LEFT_RIGHT), 0, 0, 1, 1}, - {0, __pyx_k_FLIP_TOP_BOTTOM, sizeof(__pyx_k_FLIP_TOP_BOTTOM), 0, 0, 1, 1}, - {0, __pyx_k_HELP_URL, sizeof(__pyx_k_HELP_URL), 0, 0, 1, 1}, - {0, __pyx_k_IMG_FORMATS, sizeof(__pyx_k_IMG_FORMATS), 0, 0, 1, 1}, - {0, __pyx_k_INTER_AREA, sizeof(__pyx_k_INTER_AREA), 0, 0, 1, 1}, - {0, __pyx_k_INTER_LINEAR, sizeof(__pyx_k_INTER_LINEAR), 0, 0, 1, 1}, - {0, __pyx_k_Image, sizeof(__pyx_k_Image), 0, 0, 1, 1}, - {0, __pyx_k_ImageOps, sizeof(__pyx_k_ImageOps), 0, 0, 1, 1}, - {0, __pyx_k_Image_Not_Found, sizeof(__pyx_k_Image_Not_Found), 0, 1, 0, 0}, - {0, __pyx_k_LoadImages, sizeof(__pyx_k_LoadImages), 0, 0, 1, 1}, - {0, __pyx_k_LoadImages___init, sizeof(__pyx_k_LoadImages___init), 0, 0, 1, 1}, - {0, __pyx_k_LoadImages___iter, sizeof(__pyx_k_LoadImages___iter), 0, 0, 1, 1}, - {0, __pyx_k_LoadImages___len, sizeof(__pyx_k_LoadImages___len), 0, 0, 1, 1}, - {0, __pyx_k_LoadImages___next, 
sizeof(__pyx_k_LoadImages___next), 0, 0, 1, 1}, - {0, __pyx_k_LoadImages_new_video, sizeof(__pyx_k_LoadImages_new_video), 0, 0, 1, 1}, - {0, __pyx_k_No_images_or_videos_found_in, sizeof(__pyx_k_No_images_or_videos_found_in), 0, 1, 0, 0}, - {0, __pyx_k_Orientation, sizeof(__pyx_k_Orientation), 0, 1, 0, 1}, - {0, __pyx_k_PIL, sizeof(__pyx_k_PIL), 0, 0, 1, 1}, - {0, __pyx_k_Path, sizeof(__pyx_k_Path), 0, 0, 1, 1}, - {0, __pyx_k_Pool, sizeof(__pyx_k_Pool), 0, 0, 1, 1}, - {0, __pyx_k_ROTATE_180, sizeof(__pyx_k_ROTATE_180), 0, 0, 1, 1}, - {0, __pyx_k_ROTATE_270, sizeof(__pyx_k_ROTATE_270), 0, 0, 1, 1}, - {0, __pyx_k_ROTATE_90, sizeof(__pyx_k_ROTATE_90), 0, 0, 1, 1}, - {0, __pyx_k_StopIteration, sizeof(__pyx_k_StopIteration), 0, 0, 1, 1}, - {0, __pyx_k_Supported_formats_are_images, sizeof(__pyx_k_Supported_formats_are_images), 0, 1, 0, 0}, - {0, __pyx_k_TAGS, sizeof(__pyx_k_TAGS), 0, 0, 1, 1}, - {0, __pyx_k_TRANSPOSE, sizeof(__pyx_k_TRANSPOSE), 0, 0, 1, 1}, - {0, __pyx_k_TRANSVERSE, sizeof(__pyx_k_TRANSVERSE), 0, 0, 1, 1}, - {0, __pyx_k_Thread, sizeof(__pyx_k_Thread), 0, 0, 1, 1}, - {0, __pyx_k_ThreadPool, sizeof(__pyx_k_ThreadPool), 0, 0, 1, 1}, - {0, __pyx_k_VID_FORMATS, sizeof(__pyx_k_VID_FORMATS), 0, 0, 1, 1}, - {0, __pyx_k_VideoCapture, sizeof(__pyx_k_VideoCapture), 0, 0, 1, 1}, - {0, __pyx_k_ZipFile, sizeof(__pyx_k_ZipFile), 0, 0, 1, 1}, - {0, __pyx_k__10, sizeof(__pyx_k__10), 0, 1, 0, 0}, - {0, __pyx_k__18, sizeof(__pyx_k__18), 0, 1, 0, 0}, - {0, __pyx_k__21, sizeof(__pyx_k__21), 0, 0, 1, 1}, - {0, __pyx_k__21, sizeof(__pyx_k__21), 0, 1, 0, 1}, - {0, __pyx_k__25, sizeof(__pyx_k__25), 0, 1, 0, 0}, - {0, __pyx_k__26, sizeof(__pyx_k__26), 0, 1, 0, 0}, - {0, __pyx_k__3, sizeof(__pyx_k__3), 0, 0, 1, 1}, - {0, __pyx_k__3, sizeof(__pyx_k__3), 0, 1, 0, 0}, - {0, __pyx_k__4, sizeof(__pyx_k__4), 0, 1, 0, 0}, - {0, __pyx_k__5, sizeof(__pyx_k__5), 0, 1, 0, 0}, - {0, __pyx_k__6, sizeof(__pyx_k__6), 0, 1, 0, 0}, - {0, __pyx_k__64, sizeof(__pyx_k__64), 0, 0, 1, 1}, - {0, 
__pyx_k__7, sizeof(__pyx_k__7), 0, 1, 0, 0}, - {0, __pyx_k__8, sizeof(__pyx_k__8), 0, 1, 0, 0}, - {0, __pyx_k__9, sizeof(__pyx_k__9), 0, 1, 0, 0}, - {0, __pyx_k_a, sizeof(__pyx_k_a), 0, 1, 0, 1}, - {0, __pyx_k_annotated_only, sizeof(__pyx_k_annotated_only), 0, 0, 1, 1}, - {0, __pyx_k_any, sizeof(__pyx_k_any), 0, 0, 1, 1}, - {0, __pyx_k_append, sizeof(__pyx_k_append), 0, 0, 1, 1}, - {0, __pyx_k_args, sizeof(__pyx_k_args), 0, 0, 1, 1}, - {0, __pyx_k_array, sizeof(__pyx_k_array), 0, 0, 1, 1}, - {0, __pyx_k_as_posix, sizeof(__pyx_k_as_posix), 0, 0, 1, 1}, - {0, __pyx_k_ascontiguousarray, sizeof(__pyx_k_ascontiguousarray), 0, 0, 1, 1}, - {0, __pyx_k_asf, sizeof(__pyx_k_asf), 0, 1, 0, 1}, - {0, __pyx_k_astype, sizeof(__pyx_k_astype), 0, 0, 1, 1}, - {0, __pyx_k_asyncio_coroutines, sizeof(__pyx_k_asyncio_coroutines), 0, 0, 1, 1}, - {0, __pyx_k_augment, sizeof(__pyx_k_augment), 0, 0, 1, 1}, - {0, __pyx_k_auto, sizeof(__pyx_k_auto), 0, 0, 1, 1}, - {0, __pyx_k_autosplit, sizeof(__pyx_k_autosplit), 0, 0, 1, 1}, - {0, __pyx_k_autosplit_locals_genexpr, sizeof(__pyx_k_autosplit_locals_genexpr), 0, 0, 1, 1}, - {0, __pyx_k_autosplit_test_txt, sizeof(__pyx_k_autosplit_test_txt), 0, 1, 0, 0}, - {0, __pyx_k_autosplit_train_txt, sizeof(__pyx_k_autosplit_train_txt), 0, 1, 0, 0}, - {0, __pyx_k_autosplit_val_txt, sizeof(__pyx_k_autosplit_val_txt), 0, 1, 0, 0}, - {0, __pyx_k_avi, sizeof(__pyx_k_avi), 0, 1, 0, 1}, - {0, __pyx_k_b, sizeof(__pyx_k_b), 0, 0, 1, 1}, - {0, __pyx_k_bmp, sizeof(__pyx_k_bmp), 0, 1, 0, 1}, - {0, __pyx_k_box_failure_in, sizeof(__pyx_k_box_failure_in), 0, 1, 0, 0}, - {0, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {0, __pyx_k_cap, sizeof(__pyx_k_cap), 0, 0, 1, 1}, - {0, __pyx_k_choices, sizeof(__pyx_k_choices), 0, 0, 1, 1}, - {0, __pyx_k_class_getitem, sizeof(__pyx_k_class_getitem), 0, 0, 1, 1}, - {0, __pyx_k_classifier, sizeof(__pyx_k_classifier), 0, 1, 0, 1}, - {0, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {0, __pyx_k_clip, 
sizeof(__pyx_k_clip), 0, 0, 1, 1}, - {0, __pyx_k_close, sizeof(__pyx_k_close), 0, 0, 1, 1}, - {0, __pyx_k_concatenate, sizeof(__pyx_k_concatenate), 0, 0, 1, 1}, - {0, __pyx_k_copy, sizeof(__pyx_k_copy), 0, 0, 1, 1}, - {0, __pyx_k_copyfile, sizeof(__pyx_k_copyfile), 0, 0, 1, 1}, - {0, __pyx_k_count, sizeof(__pyx_k_count), 0, 0, 1, 1}, - {0, __pyx_k_create_folder, sizeof(__pyx_k_create_folder), 0, 0, 1, 1}, - {0, __pyx_k_cv2, sizeof(__pyx_k_cv2), 0, 0, 1, 1}, - {0, __pyx_k_datasets_coco128, sizeof(__pyx_k_datasets_coco128), 0, 1, 0, 0}, - {0, __pyx_k_datasets_coco128_images, sizeof(__pyx_k_datasets_coco128_images), 0, 1, 0, 0}, - {0, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1}, - {0, __pyx_k_disable, sizeof(__pyx_k_disable), 0, 1, 0, 0}, - {0, __pyx_k_dng, sizeof(__pyx_k_dng), 0, 1, 0, 1}, - {0, __pyx_k_doc, sizeof(__pyx_k_doc), 0, 0, 1, 1}, - {0, __pyx_k_does_not_exist, sizeof(__pyx_k_does_not_exist), 0, 1, 0, 0}, - {0, __pyx_k_dtype, sizeof(__pyx_k_dtype), 0, 0, 1, 1}, - {0, __pyx_k_enable, sizeof(__pyx_k_enable), 0, 1, 0, 0}, - {0, __pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1}, - {0, __pyx_k_enter, sizeof(__pyx_k_enter), 0, 0, 1, 1}, - {0, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {0, __pyx_k_exif, sizeof(__pyx_k_exif), 0, 0, 1, 1}, - {0, __pyx_k_exif, sizeof(__pyx_k_exif), 0, 1, 0, 1}, - {0, __pyx_k_exif_size, sizeof(__pyx_k_exif_size), 0, 0, 1, 1}, - {0, __pyx_k_exif_transpose, sizeof(__pyx_k_exif_transpose), 0, 0, 1, 1}, - {0, __pyx_k_exists, sizeof(__pyx_k_exists), 0, 0, 1, 1}, - {0, __pyx_k_exit, sizeof(__pyx_k_exit), 0, 0, 1, 1}, - {0, __pyx_k_extract_boxes, sizeof(__pyx_k_extract_boxes), 0, 0, 1, 1}, - {0, __pyx_k_f, sizeof(__pyx_k_f), 0, 0, 1, 1}, - {0, __pyx_k_file, sizeof(__pyx_k_file), 0, 0, 1, 1}, - {0, __pyx_k_files, sizeof(__pyx_k_files), 0, 0, 1, 1}, - {0, __pyx_k_flat, sizeof(__pyx_k_flat), 0, 1, 0, 1}, - {0, __pyx_k_flatten_recursive, sizeof(__pyx_k_flatten_recursive), 0, 0, 1, 1}, - {0, __pyx_k_float32, 
sizeof(__pyx_k_float32), 0, 0, 1, 1}, - {0, __pyx_k_frame, sizeof(__pyx_k_frame), 0, 0, 1, 1}, - {0, __pyx_k_frames, sizeof(__pyx_k_frames), 0, 0, 1, 1}, - {0, __pyx_k_full, sizeof(__pyx_k_full), 0, 0, 1, 1}, - {0, __pyx_k_functional, sizeof(__pyx_k_functional), 0, 0, 1, 1}, - {0, __pyx_k_gc, sizeof(__pyx_k_gc), 0, 1, 0, 0}, - {0, __pyx_k_genexpr, sizeof(__pyx_k_genexpr), 0, 0, 1, 1}, - {0, __pyx_k_get, sizeof(__pyx_k_get), 0, 0, 1, 1}, - {0, __pyx_k_get_hash, sizeof(__pyx_k_get_hash), 0, 0, 1, 1}, - {0, __pyx_k_get_hash_locals_genexpr, sizeof(__pyx_k_get_hash_locals_genexpr), 0, 0, 1, 1}, - {0, __pyx_k_getexif, sizeof(__pyx_k_getexif), 0, 0, 1, 1}, - {0, __pyx_k_getexif_2, sizeof(__pyx_k_getexif_2), 0, 0, 1, 1}, - {0, __pyx_k_getsize, sizeof(__pyx_k_getsize), 0, 0, 1, 1}, - {0, __pyx_k_gif, sizeof(__pyx_k_gif), 0, 1, 0, 1}, - {0, __pyx_k_glob, sizeof(__pyx_k_glob), 0, 0, 1, 1}, - {0, __pyx_k_h, sizeof(__pyx_k_h), 0, 0, 1, 1}, - {0, __pyx_k_h0, sizeof(__pyx_k_h0), 0, 0, 1, 1}, - {0, __pyx_k_hashlib, sizeof(__pyx_k_hashlib), 0, 0, 1, 1}, - {0, __pyx_k_hexdigest, sizeof(__pyx_k_hexdigest), 0, 0, 1, 1}, - {0, __pyx_k_hp, sizeof(__pyx_k_hp), 0, 0, 1, 1}, - {0, __pyx_k_https_github_com_ultralytics_yol, sizeof(__pyx_k_https_github_com_ultralytics_yol), 0, 1, 0, 0}, - {0, __pyx_k_i, sizeof(__pyx_k_i), 0, 0, 1, 1}, - {0, __pyx_k_im, sizeof(__pyx_k_im), 0, 0, 1, 1}, - {0, __pyx_k_im_file, sizeof(__pyx_k_im_file), 0, 0, 1, 1}, - {0, __pyx_k_image, sizeof(__pyx_k_image), 0, 0, 1, 1}, - {0, __pyx_k_image, sizeof(__pyx_k_image), 0, 1, 0, 1}, - {0, __pyx_k_image_2, sizeof(__pyx_k_image_2), 0, 1, 0, 0}, - {0, __pyx_k_images, sizeof(__pyx_k_images), 0, 0, 1, 1}, - {0, __pyx_k_images, sizeof(__pyx_k_images), 0, 1, 0, 1}, - {0, __pyx_k_img, sizeof(__pyx_k_img), 0, 0, 1, 1}, - {0, __pyx_k_img0, sizeof(__pyx_k_img0), 0, 0, 1, 1}, - {0, __pyx_k_img2label_paths, sizeof(__pyx_k_img2label_paths), 0, 0, 1, 1}, - {0, __pyx_k_img4, sizeof(__pyx_k_img4), 0, 0, 1, 1}, - {0, __pyx_k_img9, 
sizeof(__pyx_k_img9), 0, 0, 1, 1}, - {0, __pyx_k_img_files, sizeof(__pyx_k_img_files), 0, 0, 1, 1}, - {0, __pyx_k_img_hw, sizeof(__pyx_k_img_hw), 0, 0, 1, 1}, - {0, __pyx_k_img_hw0, sizeof(__pyx_k_img_hw0), 0, 0, 1, 1}, - {0, __pyx_k_img_npy, sizeof(__pyx_k_img_npy), 0, 0, 1, 1}, - {0, __pyx_k_img_paths, sizeof(__pyx_k_img_paths), 0, 0, 1, 1}, - {0, __pyx_k_img_size, sizeof(__pyx_k_img_size), 0, 0, 1, 1}, - {0, __pyx_k_imgs, sizeof(__pyx_k_imgs), 0, 0, 1, 1}, - {0, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {0, __pyx_k_imread, sizeof(__pyx_k_imread), 0, 0, 1, 1}, - {0, __pyx_k_imwrite, sizeof(__pyx_k_imwrite), 0, 0, 1, 1}, - {0, __pyx_k_index, sizeof(__pyx_k_index), 0, 0, 1, 1}, - {0, __pyx_k_indices, sizeof(__pyx_k_indices), 0, 0, 1, 1}, - {0, __pyx_k_info, sizeof(__pyx_k_info), 0, 0, 1, 1}, - {0, __pyx_k_init, sizeof(__pyx_k_init), 0, 0, 1, 1}, - {0, __pyx_k_init_subclass, sizeof(__pyx_k_init_subclass), 0, 0, 1, 1}, - {0, __pyx_k_initializing, sizeof(__pyx_k_initializing), 0, 0, 1, 1}, - {0, __pyx_k_int, sizeof(__pyx_k_int), 0, 0, 1, 1}, - {0, __pyx_k_interpolation, sizeof(__pyx_k_interpolation), 0, 0, 1, 1}, - {0, __pyx_k_is_coroutine, sizeof(__pyx_k_is_coroutine), 0, 0, 1, 1}, - {0, __pyx_k_is_dir, sizeof(__pyx_k_is_dir), 0, 0, 1, 1}, - {0, __pyx_k_isdir, sizeof(__pyx_k_isdir), 0, 0, 1, 1}, - {0, __pyx_k_isenabled, sizeof(__pyx_k_isenabled), 0, 1, 0, 0}, - {0, __pyx_k_isfile, sizeof(__pyx_k_isfile), 0, 0, 1, 1}, - {0, __pyx_k_items, sizeof(__pyx_k_items), 0, 0, 1, 1}, - {0, __pyx_k_iter, sizeof(__pyx_k_iter), 0, 0, 1, 1}, - {0, __pyx_k_itertools, sizeof(__pyx_k_itertools), 0, 0, 1, 1}, - {0, __pyx_k_j, sizeof(__pyx_k_j), 0, 0, 1, 1}, - {0, __pyx_k_join, sizeof(__pyx_k_join), 0, 0, 1, 1}, - {0, __pyx_k_jpeg, sizeof(__pyx_k_jpeg), 0, 1, 0, 1}, - {0, __pyx_k_jpg, sizeof(__pyx_k_jpg), 0, 1, 0, 0}, - {0, __pyx_k_jpg_2, sizeof(__pyx_k_jpg_2), 0, 1, 0, 1}, - {0, __pyx_k_json, sizeof(__pyx_k_json), 0, 0, 1, 1}, - {0, __pyx_k_k, sizeof(__pyx_k_k), 0, 0, 1, 
1}, - {0, __pyx_k_keys, sizeof(__pyx_k_keys), 0, 0, 1, 1}, - {0, __pyx_k_labels, sizeof(__pyx_k_labels), 0, 0, 1, 1}, - {0, __pyx_k_labels, sizeof(__pyx_k_labels), 0, 1, 0, 1}, - {0, __pyx_k_labels4, sizeof(__pyx_k_labels4), 0, 0, 1, 1}, - {0, __pyx_k_labels9, sizeof(__pyx_k_labels9), 0, 0, 1, 1}, - {0, __pyx_k_lb, sizeof(__pyx_k_lb), 0, 0, 1, 1}, - {0, __pyx_k_lb_file, sizeof(__pyx_k_lb_file), 0, 0, 1, 1}, - {0, __pyx_k_len, sizeof(__pyx_k_len), 0, 0, 1, 1}, - {0, __pyx_k_letterbox, sizeof(__pyx_k_letterbox), 0, 0, 1, 1}, - {0, __pyx_k_load, sizeof(__pyx_k_load), 0, 0, 1, 1}, - {0, __pyx_k_load_image, sizeof(__pyx_k_load_image), 0, 0, 1, 1}, - {0, __pyx_k_load_mosaic, sizeof(__pyx_k_load_mosaic), 0, 0, 1, 1}, - {0, __pyx_k_load_mosaic9, sizeof(__pyx_k_load_mosaic9), 0, 0, 1, 1}, - {0, __pyx_k_load_mosaic9_locals_genexpr, sizeof(__pyx_k_load_mosaic9_locals_genexpr), 0, 0, 1, 1}, - {0, __pyx_k_load_mosaic_locals_genexpr, sizeof(__pyx_k_load_mosaic_locals_genexpr), 0, 0, 1, 1}, - {0, __pyx_k_lower, sizeof(__pyx_k_lower), 0, 0, 1, 1}, - {0, __pyx_k_m4v, sizeof(__pyx_k_m4v), 0, 1, 0, 1}, - {0, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {0, __pyx_k_makedirs, sizeof(__pyx_k_makedirs), 0, 0, 1, 1}, - {0, __pyx_k_math, sizeof(__pyx_k_math), 0, 0, 1, 1}, - {0, __pyx_k_md5, sizeof(__pyx_k_md5), 0, 0, 1, 1}, - {0, __pyx_k_metaclass, sizeof(__pyx_k_metaclass), 0, 0, 1, 1}, - {0, __pyx_k_method, sizeof(__pyx_k_method), 0, 0, 1, 1}, - {0, __pyx_k_missing_ok, sizeof(__pyx_k_missing_ok), 0, 0, 1, 1}, - {0, __pyx_k_mkdir, sizeof(__pyx_k_mkdir), 0, 0, 1, 1}, - {0, __pyx_k_mkv, sizeof(__pyx_k_mkv), 0, 1, 0, 1}, - {0, __pyx_k_mode, sizeof(__pyx_k_mode), 0, 0, 1, 1}, - {0, __pyx_k_module, sizeof(__pyx_k_module), 0, 0, 1, 1}, - {0, __pyx_k_mosaic_border, sizeof(__pyx_k_mosaic_border), 0, 0, 1, 1}, - {0, __pyx_k_mov, sizeof(__pyx_k_mov), 0, 1, 0, 1}, - {0, __pyx_k_mp4, sizeof(__pyx_k_mp4), 0, 1, 0, 1}, - {0, __pyx_k_mpeg, sizeof(__pyx_k_mpeg), 0, 1, 0, 1}, - {0, __pyx_k_mpg, 
sizeof(__pyx_k_mpg), 0, 1, 0, 1}, - {0, __pyx_k_mpo, sizeof(__pyx_k_mpo), 0, 1, 0, 1}, - {0, __pyx_k_multiprocessing_pool, sizeof(__pyx_k_multiprocessing_pool), 0, 0, 1, 1}, - {0, __pyx_k_n, sizeof(__pyx_k_n), 0, 0, 1, 1}, - {0, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {0, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {0, __pyx_k_new, sizeof(__pyx_k_new), 0, 1, 0, 0}, - {0, __pyx_k_new_path, sizeof(__pyx_k_new_path), 0, 0, 1, 1}, - {0, __pyx_k_new_video, sizeof(__pyx_k_new_video), 0, 0, 1, 1}, - {0, __pyx_k_next, sizeof(__pyx_k_next), 0, 0, 1, 1}, - {0, __pyx_k_nf, sizeof(__pyx_k_nf), 0, 0, 1, 1}, - {0, __pyx_k_ni, sizeof(__pyx_k_ni), 0, 0, 1, 1}, - {0, __pyx_k_nn, sizeof(__pyx_k_nn), 0, 0, 1, 1}, - {0, __pyx_k_np, sizeof(__pyx_k_np), 0, 0, 1, 1}, - {0, __pyx_k_npy, sizeof(__pyx_k_npy), 0, 0, 1, 1}, - {0, __pyx_k_numpy, sizeof(__pyx_k_numpy), 0, 0, 1, 1}, - {0, __pyx_k_nv, sizeof(__pyx_k_nv), 0, 0, 1, 1}, - {0, __pyx_k_open, sizeof(__pyx_k_open), 0, 0, 1, 1}, - {0, __pyx_k_orientation, sizeof(__pyx_k_orientation), 0, 0, 1, 1}, - {0, __pyx_k_os, sizeof(__pyx_k_os), 0, 0, 1, 1}, - {0, __pyx_k_out, sizeof(__pyx_k_out), 0, 0, 1, 1}, - {0, __pyx_k_p, sizeof(__pyx_k_p), 0, 0, 1, 1}, - {0, __pyx_k_padh, sizeof(__pyx_k_padh), 0, 0, 1, 1}, - {0, __pyx_k_padw, sizeof(__pyx_k_padw), 0, 0, 1, 1}, - {0, __pyx_k_padx, sizeof(__pyx_k_padx), 0, 0, 1, 1}, - {0, __pyx_k_pady, sizeof(__pyx_k_pady), 0, 0, 1, 1}, - {0, __pyx_k_parent, sizeof(__pyx_k_parent), 0, 0, 1, 1}, - {0, __pyx_k_parents, sizeof(__pyx_k_parents), 0, 0, 1, 1}, - {0, __pyx_k_path, sizeof(__pyx_k_path), 0, 0, 1, 1}, - {0, __pyx_k_pathlib, sizeof(__pyx_k_pathlib), 0, 0, 1, 1}, - {0, __pyx_k_paths, sizeof(__pyx_k_paths), 0, 0, 1, 1}, - {0, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils), 0, 0, 1, 1}, - {0, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_2, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils_2), 0, 0, 1, 1}, - {0, 
__pyx_k_pdf_toolbox_lib_dia_yolov5_utils_3, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils_3), 0, 0, 1, 1}, - {0, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_4, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils_4), 0, 0, 1, 0}, - {0, __pyx_k_png, sizeof(__pyx_k_png), 0, 1, 0, 1}, - {0, __pyx_k_prepare, sizeof(__pyx_k_prepare), 0, 0, 1, 1}, - {0, __pyx_k_print, sizeof(__pyx_k_print), 0, 0, 1, 1}, - {0, __pyx_k_qualname, sizeof(__pyx_k_qualname), 0, 0, 1, 1}, - {0, __pyx_k_r, sizeof(__pyx_k_r), 0, 0, 1, 1}, - {0, __pyx_k_random, sizeof(__pyx_k_random), 0, 0, 1, 1}, - {0, __pyx_k_ravel, sizeof(__pyx_k_ravel), 0, 0, 1, 1}, - {0, __pyx_k_read, sizeof(__pyx_k_read), 0, 0, 1, 1}, - {0, __pyx_k_recursive, sizeof(__pyx_k_recursive), 0, 0, 1, 1}, - {0, __pyx_k_relative_to, sizeof(__pyx_k_relative_to), 0, 0, 1, 1}, - {0, __pyx_k_release, sizeof(__pyx_k_release), 0, 0, 1, 1}, - {0, __pyx_k_repeat, sizeof(__pyx_k_repeat), 0, 0, 1, 1}, - {0, __pyx_k_reshape, sizeof(__pyx_k_reshape), 0, 0, 1, 1}, - {0, __pyx_k_resize, sizeof(__pyx_k_resize), 0, 0, 1, 1}, - {0, __pyx_k_resolve, sizeof(__pyx_k_resolve), 0, 0, 1, 1}, - {0, __pyx_k_ret_val, sizeof(__pyx_k_ret_val), 0, 0, 1, 1}, - {0, __pyx_k_rglob, sizeof(__pyx_k_rglob), 0, 0, 1, 1}, - {0, __pyx_k_rmtree, sizeof(__pyx_k_rmtree), 0, 0, 1, 1}, - {0, __pyx_k_rotation, sizeof(__pyx_k_rotation), 0, 0, 1, 1}, - {0, __pyx_k_rsplit, sizeof(__pyx_k_rsplit), 0, 0, 1, 1}, - {0, __pyx_k_s, sizeof(__pyx_k_s), 0, 0, 1, 1}, - {0, __pyx_k_sa, sizeof(__pyx_k_sa), 0, 0, 1, 1}, - {0, __pyx_k_sb, sizeof(__pyx_k_sb), 0, 0, 1, 1}, - {0, __pyx_k_seed, sizeof(__pyx_k_seed), 0, 0, 1, 1}, - {0, __pyx_k_segments, sizeof(__pyx_k_segments), 0, 0, 1, 1}, - {0, __pyx_k_segments4, sizeof(__pyx_k_segments4), 0, 0, 1, 1}, - {0, __pyx_k_segments9, sizeof(__pyx_k_segments9), 0, 0, 1, 1}, - {0, __pyx_k_self, sizeof(__pyx_k_self), 0, 0, 1, 1}, - {0, __pyx_k_send, sizeof(__pyx_k_send), 0, 0, 1, 1}, - {0, __pyx_k_sep, sizeof(__pyx_k_sep), 0, 0, 1, 1}, - {0, __pyx_k_set_name, 
sizeof(__pyx_k_set_name), 0, 0, 1, 1}, - {0, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, - {0, __pyx_k_shuffle, sizeof(__pyx_k_shuffle), 0, 0, 1, 1}, - {0, __pyx_k_shutil, sizeof(__pyx_k_shutil), 0, 0, 1, 1}, - {0, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 1}, - {0, __pyx_k_spec, sizeof(__pyx_k_spec), 0, 0, 1, 1}, - {0, __pyx_k_split, sizeof(__pyx_k_split), 0, 0, 1, 1}, - {0, __pyx_k_splitlines, sizeof(__pyx_k_splitlines), 0, 0, 1, 1}, - {0, __pyx_k_stem, sizeof(__pyx_k_stem), 0, 0, 1, 1}, - {0, __pyx_k_stride, sizeof(__pyx_k_stride), 0, 0, 1, 1}, - {0, __pyx_k_strip, sizeof(__pyx_k_strip), 0, 0, 1, 1}, - {0, __pyx_k_suffix, sizeof(__pyx_k_suffix), 0, 0, 1, 1}, - {0, __pyx_k_sum, sizeof(__pyx_k_sum), 0, 0, 1, 1}, - {0, __pyx_k_super, sizeof(__pyx_k_super), 0, 0, 1, 1}, - {0, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {0, __pyx_k_threading, sizeof(__pyx_k_threading), 0, 0, 1, 1}, - {0, __pyx_k_throw, sizeof(__pyx_k_throw), 0, 0, 1, 1}, - {0, __pyx_k_tif, sizeof(__pyx_k_tif), 0, 1, 0, 1}, - {0, __pyx_k_tiff, sizeof(__pyx_k_tiff), 0, 1, 0, 1}, - {0, __pyx_k_time, sizeof(__pyx_k_time), 0, 0, 1, 1}, - {0, __pyx_k_tobytes, sizeof(__pyx_k_tobytes), 0, 0, 1, 1}, - {0, __pyx_k_torch, sizeof(__pyx_k_torch), 0, 0, 1, 1}, - {0, __pyx_k_torch_nn_functional, sizeof(__pyx_k_torch_nn_functional), 0, 0, 1, 1}, - {0, __pyx_k_total, sizeof(__pyx_k_total), 0, 0, 1, 1}, - {0, __pyx_k_tqdm, sizeof(__pyx_k_tqdm), 0, 0, 1, 1}, - {0, __pyx_k_transpose, sizeof(__pyx_k_transpose), 0, 0, 1, 1}, - {0, __pyx_k_txt, sizeof(__pyx_k_txt), 0, 1, 0, 0}, - {0, __pyx_k_txt_2, sizeof(__pyx_k_txt_2), 0, 0, 1, 1}, - {0, __pyx_k_uint8, sizeof(__pyx_k_uint8), 0, 0, 1, 1}, - {0, __pyx_k_uniform, sizeof(__pyx_k_uniform), 0, 0, 1, 1}, - {0, __pyx_k_unlink, sizeof(__pyx_k_unlink), 0, 0, 1, 1}, - {0, __pyx_k_update, sizeof(__pyx_k_update), 0, 0, 1, 1}, - {0, __pyx_k_using_txt_labeled_images_only, sizeof(__pyx_k_using_txt_labeled_images_only), 0, 1, 0, 0}, - {0, __pyx_k_video, 
sizeof(__pyx_k_video), 0, 1, 0, 1}, - {0, __pyx_k_video_2, sizeof(__pyx_k_video_2), 0, 1, 0, 0}, - {0, __pyx_k_video_flag, sizeof(__pyx_k_video_flag), 0, 0, 1, 1}, - {0, __pyx_k_videos, sizeof(__pyx_k_videos), 0, 1, 0, 0}, - {0, __pyx_k_videos_2, sizeof(__pyx_k_videos_2), 0, 0, 1, 1}, - {0, __pyx_k_w, sizeof(__pyx_k_w), 0, 0, 1, 1}, - {0, __pyx_k_w0, sizeof(__pyx_k_w0), 0, 0, 1, 1}, - {0, __pyx_k_webp, sizeof(__pyx_k_webp), 0, 1, 0, 1}, - {0, __pyx_k_weights, sizeof(__pyx_k_weights), 0, 0, 1, 1}, - {0, __pyx_k_wmv, sizeof(__pyx_k_wmv), 0, 1, 0, 1}, - {0, __pyx_k_wp, sizeof(__pyx_k_wp), 0, 0, 1, 1}, - {0, __pyx_k_write, sizeof(__pyx_k_write), 0, 0, 1, 1}, - {0, __pyx_k_x, sizeof(__pyx_k_x), 0, 0, 1, 1}, - {0, __pyx_k_x1, sizeof(__pyx_k_x1), 0, 0, 1, 1}, - {0, __pyx_k_x1a, sizeof(__pyx_k_x1a), 0, 0, 1, 1}, - {0, __pyx_k_x1b, sizeof(__pyx_k_x1b), 0, 0, 1, 1}, - {0, __pyx_k_x2, sizeof(__pyx_k_x2), 0, 0, 1, 1}, - {0, __pyx_k_x2a, sizeof(__pyx_k_x2a), 0, 0, 1, 1}, - {0, __pyx_k_x2b, sizeof(__pyx_k_x2b), 0, 0, 1, 1}, - {0, __pyx_k_xc, sizeof(__pyx_k_xc), 0, 0, 1, 1}, - {0, __pyx_k_xyn2xy, sizeof(__pyx_k_xyn2xy), 0, 0, 1, 1}, - {0, __pyx_k_xywh2xyxy, sizeof(__pyx_k_xywh2xyxy), 0, 0, 1, 1}, - {0, __pyx_k_xywhn2xyxy, sizeof(__pyx_k_xywhn2xyxy), 0, 0, 1, 1}, - {0, __pyx_k_y1, sizeof(__pyx_k_y1), 0, 0, 1, 1}, - {0, __pyx_k_y1a, sizeof(__pyx_k_y1a), 0, 0, 1, 1}, - {0, __pyx_k_y1b, sizeof(__pyx_k_y1b), 0, 0, 1, 1}, - {0, __pyx_k_y2, sizeof(__pyx_k_y2), 0, 0, 1, 1}, - {0, __pyx_k_y2a, sizeof(__pyx_k_y2a), 0, 0, 1, 1}, - {0, __pyx_k_y2b, sizeof(__pyx_k_y2b), 0, 0, 1, 1}, - {0, __pyx_k_yaml, sizeof(__pyx_k_yaml), 0, 0, 1, 1}, - {0, __pyx_k_yc, sizeof(__pyx_k_yc), 0, 0, 1, 1}, - {0, __pyx_k_zip, sizeof(__pyx_k_zip), 0, 0, 1, 1}, - {0, __pyx_k_zipfile, sizeof(__pyx_k_zipfile), 0, 0, 1, 1}, - #else - {&__pyx_kp_u_, __pyx_k_, sizeof(__pyx_k_), 0, 1, 0, 0}, - {&__pyx_n_s_AssertionError, __pyx_k_AssertionError, sizeof(__pyx_k_AssertionError), 0, 0, 1, 1}, - 
{&__pyx_kp_u_Autosplitting_images_from, __pyx_k_Autosplitting_images_from, sizeof(__pyx_k_Autosplitting_images_from), 0, 1, 0, 0}, - {&__pyx_n_s_CAP_PROP_FRAME_COUNT, __pyx_k_CAP_PROP_FRAME_COUNT, sizeof(__pyx_k_CAP_PROP_FRAME_COUNT), 0, 0, 1, 1}, - {&__pyx_kp_u_ERROR, __pyx_k_ERROR, sizeof(__pyx_k_ERROR), 0, 1, 0, 0}, - {&__pyx_n_s_ExifTags, __pyx_k_ExifTags, sizeof(__pyx_k_ExifTags), 0, 0, 1, 1}, - {&__pyx_n_s_F, __pyx_k_F, sizeof(__pyx_k_F), 0, 0, 1, 1}, - {&__pyx_n_s_FLIP_LEFT_RIGHT, __pyx_k_FLIP_LEFT_RIGHT, sizeof(__pyx_k_FLIP_LEFT_RIGHT), 0, 0, 1, 1}, - {&__pyx_n_s_FLIP_TOP_BOTTOM, __pyx_k_FLIP_TOP_BOTTOM, sizeof(__pyx_k_FLIP_TOP_BOTTOM), 0, 0, 1, 1}, - {&__pyx_n_s_HELP_URL, __pyx_k_HELP_URL, sizeof(__pyx_k_HELP_URL), 0, 0, 1, 1}, - {&__pyx_n_s_IMG_FORMATS, __pyx_k_IMG_FORMATS, sizeof(__pyx_k_IMG_FORMATS), 0, 0, 1, 1}, - {&__pyx_n_s_INTER_AREA, __pyx_k_INTER_AREA, sizeof(__pyx_k_INTER_AREA), 0, 0, 1, 1}, - {&__pyx_n_s_INTER_LINEAR, __pyx_k_INTER_LINEAR, sizeof(__pyx_k_INTER_LINEAR), 0, 0, 1, 1}, - {&__pyx_n_s_Image, __pyx_k_Image, sizeof(__pyx_k_Image), 0, 0, 1, 1}, - {&__pyx_n_s_ImageOps, __pyx_k_ImageOps, sizeof(__pyx_k_ImageOps), 0, 0, 1, 1}, - {&__pyx_kp_u_Image_Not_Found, __pyx_k_Image_Not_Found, sizeof(__pyx_k_Image_Not_Found), 0, 1, 0, 0}, - {&__pyx_n_s_LoadImages, __pyx_k_LoadImages, sizeof(__pyx_k_LoadImages), 0, 0, 1, 1}, - {&__pyx_n_s_LoadImages___init, __pyx_k_LoadImages___init, sizeof(__pyx_k_LoadImages___init), 0, 0, 1, 1}, - {&__pyx_n_s_LoadImages___iter, __pyx_k_LoadImages___iter, sizeof(__pyx_k_LoadImages___iter), 0, 0, 1, 1}, - {&__pyx_n_s_LoadImages___len, __pyx_k_LoadImages___len, sizeof(__pyx_k_LoadImages___len), 0, 0, 1, 1}, - {&__pyx_n_s_LoadImages___next, __pyx_k_LoadImages___next, sizeof(__pyx_k_LoadImages___next), 0, 0, 1, 1}, - {&__pyx_n_s_LoadImages_new_video, __pyx_k_LoadImages_new_video, sizeof(__pyx_k_LoadImages_new_video), 0, 0, 1, 1}, - {&__pyx_kp_u_No_images_or_videos_found_in, __pyx_k_No_images_or_videos_found_in, 
sizeof(__pyx_k_No_images_or_videos_found_in), 0, 1, 0, 0}, - {&__pyx_n_u_Orientation, __pyx_k_Orientation, sizeof(__pyx_k_Orientation), 0, 1, 0, 1}, - {&__pyx_n_s_PIL, __pyx_k_PIL, sizeof(__pyx_k_PIL), 0, 0, 1, 1}, - {&__pyx_n_s_Path, __pyx_k_Path, sizeof(__pyx_k_Path), 0, 0, 1, 1}, - {&__pyx_n_s_Pool, __pyx_k_Pool, sizeof(__pyx_k_Pool), 0, 0, 1, 1}, - {&__pyx_n_s_ROTATE_180, __pyx_k_ROTATE_180, sizeof(__pyx_k_ROTATE_180), 0, 0, 1, 1}, - {&__pyx_n_s_ROTATE_270, __pyx_k_ROTATE_270, sizeof(__pyx_k_ROTATE_270), 0, 0, 1, 1}, - {&__pyx_n_s_ROTATE_90, __pyx_k_ROTATE_90, sizeof(__pyx_k_ROTATE_90), 0, 0, 1, 1}, - {&__pyx_n_s_StopIteration, __pyx_k_StopIteration, sizeof(__pyx_k_StopIteration), 0, 0, 1, 1}, - {&__pyx_kp_u_Supported_formats_are_images, __pyx_k_Supported_formats_are_images, sizeof(__pyx_k_Supported_formats_are_images), 0, 1, 0, 0}, - {&__pyx_n_s_TAGS, __pyx_k_TAGS, sizeof(__pyx_k_TAGS), 0, 0, 1, 1}, - {&__pyx_n_s_TRANSPOSE, __pyx_k_TRANSPOSE, sizeof(__pyx_k_TRANSPOSE), 0, 0, 1, 1}, - {&__pyx_n_s_TRANSVERSE, __pyx_k_TRANSVERSE, sizeof(__pyx_k_TRANSVERSE), 0, 0, 1, 1}, - {&__pyx_n_s_Thread, __pyx_k_Thread, sizeof(__pyx_k_Thread), 0, 0, 1, 1}, - {&__pyx_n_s_ThreadPool, __pyx_k_ThreadPool, sizeof(__pyx_k_ThreadPool), 0, 0, 1, 1}, - {&__pyx_n_s_VID_FORMATS, __pyx_k_VID_FORMATS, sizeof(__pyx_k_VID_FORMATS), 0, 0, 1, 1}, - {&__pyx_n_s_VideoCapture, __pyx_k_VideoCapture, sizeof(__pyx_k_VideoCapture), 0, 0, 1, 1}, - {&__pyx_n_s_ZipFile, __pyx_k_ZipFile, sizeof(__pyx_k_ZipFile), 0, 0, 1, 1}, - {&__pyx_kp_u__10, __pyx_k__10, sizeof(__pyx_k__10), 0, 1, 0, 0}, - {&__pyx_kp_u__18, __pyx_k__18, sizeof(__pyx_k__18), 0, 1, 0, 0}, - {&__pyx_n_s__21, __pyx_k__21, sizeof(__pyx_k__21), 0, 0, 1, 1}, - {&__pyx_n_u__21, __pyx_k__21, sizeof(__pyx_k__21), 0, 1, 0, 1}, - {&__pyx_kp_u__25, __pyx_k__25, sizeof(__pyx_k__25), 0, 1, 0, 0}, - {&__pyx_kp_u__26, __pyx_k__26, sizeof(__pyx_k__26), 0, 1, 0, 0}, - {&__pyx_n_s__3, __pyx_k__3, sizeof(__pyx_k__3), 0, 0, 1, 1}, - {&__pyx_kp_u__3, 
__pyx_k__3, sizeof(__pyx_k__3), 0, 1, 0, 0}, - {&__pyx_kp_u__4, __pyx_k__4, sizeof(__pyx_k__4), 0, 1, 0, 0}, - {&__pyx_kp_u__5, __pyx_k__5, sizeof(__pyx_k__5), 0, 1, 0, 0}, - {&__pyx_kp_u__6, __pyx_k__6, sizeof(__pyx_k__6), 0, 1, 0, 0}, - {&__pyx_n_s__64, __pyx_k__64, sizeof(__pyx_k__64), 0, 0, 1, 1}, - {&__pyx_kp_u__7, __pyx_k__7, sizeof(__pyx_k__7), 0, 1, 0, 0}, - {&__pyx_kp_u__8, __pyx_k__8, sizeof(__pyx_k__8), 0, 1, 0, 0}, - {&__pyx_kp_u__9, __pyx_k__9, sizeof(__pyx_k__9), 0, 1, 0, 0}, - {&__pyx_n_u_a, __pyx_k_a, sizeof(__pyx_k_a), 0, 1, 0, 1}, - {&__pyx_n_s_annotated_only, __pyx_k_annotated_only, sizeof(__pyx_k_annotated_only), 0, 0, 1, 1}, - {&__pyx_n_s_any, __pyx_k_any, sizeof(__pyx_k_any), 0, 0, 1, 1}, - {&__pyx_n_s_append, __pyx_k_append, sizeof(__pyx_k_append), 0, 0, 1, 1}, - {&__pyx_n_s_args, __pyx_k_args, sizeof(__pyx_k_args), 0, 0, 1, 1}, - {&__pyx_n_s_array, __pyx_k_array, sizeof(__pyx_k_array), 0, 0, 1, 1}, - {&__pyx_n_s_as_posix, __pyx_k_as_posix, sizeof(__pyx_k_as_posix), 0, 0, 1, 1}, - {&__pyx_n_s_ascontiguousarray, __pyx_k_ascontiguousarray, sizeof(__pyx_k_ascontiguousarray), 0, 0, 1, 1}, - {&__pyx_n_u_asf, __pyx_k_asf, sizeof(__pyx_k_asf), 0, 1, 0, 1}, - {&__pyx_n_s_astype, __pyx_k_astype, sizeof(__pyx_k_astype), 0, 0, 1, 1}, - {&__pyx_n_s_asyncio_coroutines, __pyx_k_asyncio_coroutines, sizeof(__pyx_k_asyncio_coroutines), 0, 0, 1, 1}, - {&__pyx_n_s_augment, __pyx_k_augment, sizeof(__pyx_k_augment), 0, 0, 1, 1}, - {&__pyx_n_s_auto, __pyx_k_auto, sizeof(__pyx_k_auto), 0, 0, 1, 1}, - {&__pyx_n_s_autosplit, __pyx_k_autosplit, sizeof(__pyx_k_autosplit), 0, 0, 1, 1}, - {&__pyx_n_s_autosplit_locals_genexpr, __pyx_k_autosplit_locals_genexpr, sizeof(__pyx_k_autosplit_locals_genexpr), 0, 0, 1, 1}, - {&__pyx_kp_u_autosplit_test_txt, __pyx_k_autosplit_test_txt, sizeof(__pyx_k_autosplit_test_txt), 0, 1, 0, 0}, - {&__pyx_kp_u_autosplit_train_txt, __pyx_k_autosplit_train_txt, sizeof(__pyx_k_autosplit_train_txt), 0, 1, 0, 0}, - {&__pyx_kp_u_autosplit_val_txt, 
__pyx_k_autosplit_val_txt, sizeof(__pyx_k_autosplit_val_txt), 0, 1, 0, 0}, - {&__pyx_n_u_avi, __pyx_k_avi, sizeof(__pyx_k_avi), 0, 1, 0, 1}, - {&__pyx_n_s_b, __pyx_k_b, sizeof(__pyx_k_b), 0, 0, 1, 1}, - {&__pyx_n_u_bmp, __pyx_k_bmp, sizeof(__pyx_k_bmp), 0, 1, 0, 1}, - {&__pyx_kp_u_box_failure_in, __pyx_k_box_failure_in, sizeof(__pyx_k_box_failure_in), 0, 1, 0, 0}, - {&__pyx_n_s_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {&__pyx_n_s_cap, __pyx_k_cap, sizeof(__pyx_k_cap), 0, 0, 1, 1}, - {&__pyx_n_s_choices, __pyx_k_choices, sizeof(__pyx_k_choices), 0, 0, 1, 1}, - {&__pyx_n_s_class_getitem, __pyx_k_class_getitem, sizeof(__pyx_k_class_getitem), 0, 0, 1, 1}, - {&__pyx_n_u_classifier, __pyx_k_classifier, sizeof(__pyx_k_classifier), 0, 1, 0, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_n_s_clip, __pyx_k_clip, sizeof(__pyx_k_clip), 0, 0, 1, 1}, - {&__pyx_n_s_close, __pyx_k_close, sizeof(__pyx_k_close), 0, 0, 1, 1}, - {&__pyx_n_s_concatenate, __pyx_k_concatenate, sizeof(__pyx_k_concatenate), 0, 0, 1, 1}, - {&__pyx_n_s_copy, __pyx_k_copy, sizeof(__pyx_k_copy), 0, 0, 1, 1}, - {&__pyx_n_s_copyfile, __pyx_k_copyfile, sizeof(__pyx_k_copyfile), 0, 0, 1, 1}, - {&__pyx_n_s_count, __pyx_k_count, sizeof(__pyx_k_count), 0, 0, 1, 1}, - {&__pyx_n_s_create_folder, __pyx_k_create_folder, sizeof(__pyx_k_create_folder), 0, 0, 1, 1}, - {&__pyx_n_s_cv2, __pyx_k_cv2, sizeof(__pyx_k_cv2), 0, 0, 1, 1}, - {&__pyx_kp_u_datasets_coco128, __pyx_k_datasets_coco128, sizeof(__pyx_k_datasets_coco128), 0, 1, 0, 0}, - {&__pyx_kp_u_datasets_coco128_images, __pyx_k_datasets_coco128_images, sizeof(__pyx_k_datasets_coco128_images), 0, 1, 0, 0}, - {&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1}, - {&__pyx_kp_u_disable, __pyx_k_disable, sizeof(__pyx_k_disable), 0, 1, 0, 0}, - {&__pyx_n_u_dng, __pyx_k_dng, sizeof(__pyx_k_dng), 0, 1, 0, 1}, - {&__pyx_n_s_doc, __pyx_k_doc, sizeof(__pyx_k_doc), 0, 0, 1, 1}, - 
{&__pyx_kp_u_does_not_exist, __pyx_k_does_not_exist, sizeof(__pyx_k_does_not_exist), 0, 1, 0, 0}, - {&__pyx_n_s_dtype, __pyx_k_dtype, sizeof(__pyx_k_dtype), 0, 0, 1, 1}, - {&__pyx_kp_u_enable, __pyx_k_enable, sizeof(__pyx_k_enable), 0, 1, 0, 0}, - {&__pyx_n_s_encode, __pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1}, - {&__pyx_n_s_enter, __pyx_k_enter, sizeof(__pyx_k_enter), 0, 0, 1, 1}, - {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {&__pyx_n_s_exif, __pyx_k_exif, sizeof(__pyx_k_exif), 0, 0, 1, 1}, - {&__pyx_n_u_exif, __pyx_k_exif, sizeof(__pyx_k_exif), 0, 1, 0, 1}, - {&__pyx_n_s_exif_size, __pyx_k_exif_size, sizeof(__pyx_k_exif_size), 0, 0, 1, 1}, - {&__pyx_n_s_exif_transpose, __pyx_k_exif_transpose, sizeof(__pyx_k_exif_transpose), 0, 0, 1, 1}, - {&__pyx_n_s_exists, __pyx_k_exists, sizeof(__pyx_k_exists), 0, 0, 1, 1}, - {&__pyx_n_s_exit, __pyx_k_exit, sizeof(__pyx_k_exit), 0, 0, 1, 1}, - {&__pyx_n_s_extract_boxes, __pyx_k_extract_boxes, sizeof(__pyx_k_extract_boxes), 0, 0, 1, 1}, - {&__pyx_n_s_f, __pyx_k_f, sizeof(__pyx_k_f), 0, 0, 1, 1}, - {&__pyx_n_s_file, __pyx_k_file, sizeof(__pyx_k_file), 0, 0, 1, 1}, - {&__pyx_n_s_files, __pyx_k_files, sizeof(__pyx_k_files), 0, 0, 1, 1}, - {&__pyx_n_u_flat, __pyx_k_flat, sizeof(__pyx_k_flat), 0, 1, 0, 1}, - {&__pyx_n_s_flatten_recursive, __pyx_k_flatten_recursive, sizeof(__pyx_k_flatten_recursive), 0, 0, 1, 1}, - {&__pyx_n_s_float32, __pyx_k_float32, sizeof(__pyx_k_float32), 0, 0, 1, 1}, - {&__pyx_n_s_frame, __pyx_k_frame, sizeof(__pyx_k_frame), 0, 0, 1, 1}, - {&__pyx_n_s_frames, __pyx_k_frames, sizeof(__pyx_k_frames), 0, 0, 1, 1}, - {&__pyx_n_s_full, __pyx_k_full, sizeof(__pyx_k_full), 0, 0, 1, 1}, - {&__pyx_n_s_functional, __pyx_k_functional, sizeof(__pyx_k_functional), 0, 0, 1, 1}, - {&__pyx_kp_u_gc, __pyx_k_gc, sizeof(__pyx_k_gc), 0, 1, 0, 0}, - {&__pyx_n_s_genexpr, __pyx_k_genexpr, sizeof(__pyx_k_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_get, __pyx_k_get, sizeof(__pyx_k_get), 0, 0, 
1, 1}, - {&__pyx_n_s_get_hash, __pyx_k_get_hash, sizeof(__pyx_k_get_hash), 0, 0, 1, 1}, - {&__pyx_n_s_get_hash_locals_genexpr, __pyx_k_get_hash_locals_genexpr, sizeof(__pyx_k_get_hash_locals_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_getexif, __pyx_k_getexif, sizeof(__pyx_k_getexif), 0, 0, 1, 1}, - {&__pyx_n_s_getexif_2, __pyx_k_getexif_2, sizeof(__pyx_k_getexif_2), 0, 0, 1, 1}, - {&__pyx_n_s_getsize, __pyx_k_getsize, sizeof(__pyx_k_getsize), 0, 0, 1, 1}, - {&__pyx_n_u_gif, __pyx_k_gif, sizeof(__pyx_k_gif), 0, 1, 0, 1}, - {&__pyx_n_s_glob, __pyx_k_glob, sizeof(__pyx_k_glob), 0, 0, 1, 1}, - {&__pyx_n_s_h, __pyx_k_h, sizeof(__pyx_k_h), 0, 0, 1, 1}, - {&__pyx_n_s_h0, __pyx_k_h0, sizeof(__pyx_k_h0), 0, 0, 1, 1}, - {&__pyx_n_s_hashlib, __pyx_k_hashlib, sizeof(__pyx_k_hashlib), 0, 0, 1, 1}, - {&__pyx_n_s_hexdigest, __pyx_k_hexdigest, sizeof(__pyx_k_hexdigest), 0, 0, 1, 1}, - {&__pyx_n_s_hp, __pyx_k_hp, sizeof(__pyx_k_hp), 0, 0, 1, 1}, - {&__pyx_kp_u_https_github_com_ultralytics_yol, __pyx_k_https_github_com_ultralytics_yol, sizeof(__pyx_k_https_github_com_ultralytics_yol), 0, 1, 0, 0}, - {&__pyx_n_s_i, __pyx_k_i, sizeof(__pyx_k_i), 0, 0, 1, 1}, - {&__pyx_n_s_im, __pyx_k_im, sizeof(__pyx_k_im), 0, 0, 1, 1}, - {&__pyx_n_s_im_file, __pyx_k_im_file, sizeof(__pyx_k_im_file), 0, 0, 1, 1}, - {&__pyx_n_s_image, __pyx_k_image, sizeof(__pyx_k_image), 0, 0, 1, 1}, - {&__pyx_n_u_image, __pyx_k_image, sizeof(__pyx_k_image), 0, 1, 0, 1}, - {&__pyx_kp_u_image_2, __pyx_k_image_2, sizeof(__pyx_k_image_2), 0, 1, 0, 0}, - {&__pyx_n_s_images, __pyx_k_images, sizeof(__pyx_k_images), 0, 0, 1, 1}, - {&__pyx_n_u_images, __pyx_k_images, sizeof(__pyx_k_images), 0, 1, 0, 1}, - {&__pyx_n_s_img, __pyx_k_img, sizeof(__pyx_k_img), 0, 0, 1, 1}, - {&__pyx_n_s_img0, __pyx_k_img0, sizeof(__pyx_k_img0), 0, 0, 1, 1}, - {&__pyx_n_s_img2label_paths, __pyx_k_img2label_paths, sizeof(__pyx_k_img2label_paths), 0, 0, 1, 1}, - {&__pyx_n_s_img4, __pyx_k_img4, sizeof(__pyx_k_img4), 0, 0, 1, 1}, - {&__pyx_n_s_img9, 
__pyx_k_img9, sizeof(__pyx_k_img9), 0, 0, 1, 1}, - {&__pyx_n_s_img_files, __pyx_k_img_files, sizeof(__pyx_k_img_files), 0, 0, 1, 1}, - {&__pyx_n_s_img_hw, __pyx_k_img_hw, sizeof(__pyx_k_img_hw), 0, 0, 1, 1}, - {&__pyx_n_s_img_hw0, __pyx_k_img_hw0, sizeof(__pyx_k_img_hw0), 0, 0, 1, 1}, - {&__pyx_n_s_img_npy, __pyx_k_img_npy, sizeof(__pyx_k_img_npy), 0, 0, 1, 1}, - {&__pyx_n_s_img_paths, __pyx_k_img_paths, sizeof(__pyx_k_img_paths), 0, 0, 1, 1}, - {&__pyx_n_s_img_size, __pyx_k_img_size, sizeof(__pyx_k_img_size), 0, 0, 1, 1}, - {&__pyx_n_s_imgs, __pyx_k_imgs, sizeof(__pyx_k_imgs), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_imread, __pyx_k_imread, sizeof(__pyx_k_imread), 0, 0, 1, 1}, - {&__pyx_n_s_imwrite, __pyx_k_imwrite, sizeof(__pyx_k_imwrite), 0, 0, 1, 1}, - {&__pyx_n_s_index, __pyx_k_index, sizeof(__pyx_k_index), 0, 0, 1, 1}, - {&__pyx_n_s_indices, __pyx_k_indices, sizeof(__pyx_k_indices), 0, 0, 1, 1}, - {&__pyx_n_s_info, __pyx_k_info, sizeof(__pyx_k_info), 0, 0, 1, 1}, - {&__pyx_n_s_init, __pyx_k_init, sizeof(__pyx_k_init), 0, 0, 1, 1}, - {&__pyx_n_s_init_subclass, __pyx_k_init_subclass, sizeof(__pyx_k_init_subclass), 0, 0, 1, 1}, - {&__pyx_n_s_initializing, __pyx_k_initializing, sizeof(__pyx_k_initializing), 0, 0, 1, 1}, - {&__pyx_n_s_int, __pyx_k_int, sizeof(__pyx_k_int), 0, 0, 1, 1}, - {&__pyx_n_s_interpolation, __pyx_k_interpolation, sizeof(__pyx_k_interpolation), 0, 0, 1, 1}, - {&__pyx_n_s_is_coroutine, __pyx_k_is_coroutine, sizeof(__pyx_k_is_coroutine), 0, 0, 1, 1}, - {&__pyx_n_s_is_dir, __pyx_k_is_dir, sizeof(__pyx_k_is_dir), 0, 0, 1, 1}, - {&__pyx_n_s_isdir, __pyx_k_isdir, sizeof(__pyx_k_isdir), 0, 0, 1, 1}, - {&__pyx_kp_u_isenabled, __pyx_k_isenabled, sizeof(__pyx_k_isenabled), 0, 1, 0, 0}, - {&__pyx_n_s_isfile, __pyx_k_isfile, sizeof(__pyx_k_isfile), 0, 0, 1, 1}, - {&__pyx_n_s_items, __pyx_k_items, sizeof(__pyx_k_items), 0, 0, 1, 1}, - {&__pyx_n_s_iter, __pyx_k_iter, sizeof(__pyx_k_iter), 0, 0, 
1, 1}, - {&__pyx_n_s_itertools, __pyx_k_itertools, sizeof(__pyx_k_itertools), 0, 0, 1, 1}, - {&__pyx_n_s_j, __pyx_k_j, sizeof(__pyx_k_j), 0, 0, 1, 1}, - {&__pyx_n_s_join, __pyx_k_join, sizeof(__pyx_k_join), 0, 0, 1, 1}, - {&__pyx_n_u_jpeg, __pyx_k_jpeg, sizeof(__pyx_k_jpeg), 0, 1, 0, 1}, - {&__pyx_kp_u_jpg, __pyx_k_jpg, sizeof(__pyx_k_jpg), 0, 1, 0, 0}, - {&__pyx_n_u_jpg_2, __pyx_k_jpg_2, sizeof(__pyx_k_jpg_2), 0, 1, 0, 1}, - {&__pyx_n_s_json, __pyx_k_json, sizeof(__pyx_k_json), 0, 0, 1, 1}, - {&__pyx_n_s_k, __pyx_k_k, sizeof(__pyx_k_k), 0, 0, 1, 1}, - {&__pyx_n_s_keys, __pyx_k_keys, sizeof(__pyx_k_keys), 0, 0, 1, 1}, - {&__pyx_n_s_labels, __pyx_k_labels, sizeof(__pyx_k_labels), 0, 0, 1, 1}, - {&__pyx_n_u_labels, __pyx_k_labels, sizeof(__pyx_k_labels), 0, 1, 0, 1}, - {&__pyx_n_s_labels4, __pyx_k_labels4, sizeof(__pyx_k_labels4), 0, 0, 1, 1}, - {&__pyx_n_s_labels9, __pyx_k_labels9, sizeof(__pyx_k_labels9), 0, 0, 1, 1}, - {&__pyx_n_s_lb, __pyx_k_lb, sizeof(__pyx_k_lb), 0, 0, 1, 1}, - {&__pyx_n_s_lb_file, __pyx_k_lb_file, sizeof(__pyx_k_lb_file), 0, 0, 1, 1}, - {&__pyx_n_s_len, __pyx_k_len, sizeof(__pyx_k_len), 0, 0, 1, 1}, - {&__pyx_n_s_letterbox, __pyx_k_letterbox, sizeof(__pyx_k_letterbox), 0, 0, 1, 1}, - {&__pyx_n_s_load, __pyx_k_load, sizeof(__pyx_k_load), 0, 0, 1, 1}, - {&__pyx_n_s_load_image, __pyx_k_load_image, sizeof(__pyx_k_load_image), 0, 0, 1, 1}, - {&__pyx_n_s_load_mosaic, __pyx_k_load_mosaic, sizeof(__pyx_k_load_mosaic), 0, 0, 1, 1}, - {&__pyx_n_s_load_mosaic9, __pyx_k_load_mosaic9, sizeof(__pyx_k_load_mosaic9), 0, 0, 1, 1}, - {&__pyx_n_s_load_mosaic9_locals_genexpr, __pyx_k_load_mosaic9_locals_genexpr, sizeof(__pyx_k_load_mosaic9_locals_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_load_mosaic_locals_genexpr, __pyx_k_load_mosaic_locals_genexpr, sizeof(__pyx_k_load_mosaic_locals_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_lower, __pyx_k_lower, sizeof(__pyx_k_lower), 0, 0, 1, 1}, - {&__pyx_n_u_m4v, __pyx_k_m4v, sizeof(__pyx_k_m4v), 0, 1, 0, 1}, - {&__pyx_n_s_main, 
__pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_s_makedirs, __pyx_k_makedirs, sizeof(__pyx_k_makedirs), 0, 0, 1, 1}, - {&__pyx_n_s_math, __pyx_k_math, sizeof(__pyx_k_math), 0, 0, 1, 1}, - {&__pyx_n_s_md5, __pyx_k_md5, sizeof(__pyx_k_md5), 0, 0, 1, 1}, - {&__pyx_n_s_metaclass, __pyx_k_metaclass, sizeof(__pyx_k_metaclass), 0, 0, 1, 1}, - {&__pyx_n_s_method, __pyx_k_method, sizeof(__pyx_k_method), 0, 0, 1, 1}, - {&__pyx_n_s_missing_ok, __pyx_k_missing_ok, sizeof(__pyx_k_missing_ok), 0, 0, 1, 1}, - {&__pyx_n_s_mkdir, __pyx_k_mkdir, sizeof(__pyx_k_mkdir), 0, 0, 1, 1}, - {&__pyx_n_u_mkv, __pyx_k_mkv, sizeof(__pyx_k_mkv), 0, 1, 0, 1}, - {&__pyx_n_s_mode, __pyx_k_mode, sizeof(__pyx_k_mode), 0, 0, 1, 1}, - {&__pyx_n_s_module, __pyx_k_module, sizeof(__pyx_k_module), 0, 0, 1, 1}, - {&__pyx_n_s_mosaic_border, __pyx_k_mosaic_border, sizeof(__pyx_k_mosaic_border), 0, 0, 1, 1}, - {&__pyx_n_u_mov, __pyx_k_mov, sizeof(__pyx_k_mov), 0, 1, 0, 1}, - {&__pyx_n_u_mp4, __pyx_k_mp4, sizeof(__pyx_k_mp4), 0, 1, 0, 1}, - {&__pyx_n_u_mpeg, __pyx_k_mpeg, sizeof(__pyx_k_mpeg), 0, 1, 0, 1}, - {&__pyx_n_u_mpg, __pyx_k_mpg, sizeof(__pyx_k_mpg), 0, 1, 0, 1}, - {&__pyx_n_u_mpo, __pyx_k_mpo, sizeof(__pyx_k_mpo), 0, 1, 0, 1}, - {&__pyx_n_s_multiprocessing_pool, __pyx_k_multiprocessing_pool, sizeof(__pyx_k_multiprocessing_pool), 0, 0, 1, 1}, - {&__pyx_n_s_n, __pyx_k_n, sizeof(__pyx_k_n), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {&__pyx_kp_u_new, __pyx_k_new, sizeof(__pyx_k_new), 0, 1, 0, 0}, - {&__pyx_n_s_new_path, __pyx_k_new_path, sizeof(__pyx_k_new_path), 0, 0, 1, 1}, - {&__pyx_n_s_new_video, __pyx_k_new_video, sizeof(__pyx_k_new_video), 0, 0, 1, 1}, - {&__pyx_n_s_next, __pyx_k_next, sizeof(__pyx_k_next), 0, 0, 1, 1}, - {&__pyx_n_s_nf, __pyx_k_nf, sizeof(__pyx_k_nf), 0, 0, 1, 1}, - {&__pyx_n_s_ni, __pyx_k_ni, sizeof(__pyx_k_ni), 0, 0, 1, 1}, - {&__pyx_n_s_nn, __pyx_k_nn, 
sizeof(__pyx_k_nn), 0, 0, 1, 1}, - {&__pyx_n_s_np, __pyx_k_np, sizeof(__pyx_k_np), 0, 0, 1, 1}, - {&__pyx_n_s_npy, __pyx_k_npy, sizeof(__pyx_k_npy), 0, 0, 1, 1}, - {&__pyx_n_s_numpy, __pyx_k_numpy, sizeof(__pyx_k_numpy), 0, 0, 1, 1}, - {&__pyx_n_s_nv, __pyx_k_nv, sizeof(__pyx_k_nv), 0, 0, 1, 1}, - {&__pyx_n_s_open, __pyx_k_open, sizeof(__pyx_k_open), 0, 0, 1, 1}, - {&__pyx_n_s_orientation, __pyx_k_orientation, sizeof(__pyx_k_orientation), 0, 0, 1, 1}, - {&__pyx_n_s_os, __pyx_k_os, sizeof(__pyx_k_os), 0, 0, 1, 1}, - {&__pyx_n_s_out, __pyx_k_out, sizeof(__pyx_k_out), 0, 0, 1, 1}, - {&__pyx_n_s_p, __pyx_k_p, sizeof(__pyx_k_p), 0, 0, 1, 1}, - {&__pyx_n_s_padh, __pyx_k_padh, sizeof(__pyx_k_padh), 0, 0, 1, 1}, - {&__pyx_n_s_padw, __pyx_k_padw, sizeof(__pyx_k_padw), 0, 0, 1, 1}, - {&__pyx_n_s_padx, __pyx_k_padx, sizeof(__pyx_k_padx), 0, 0, 1, 1}, - {&__pyx_n_s_pady, __pyx_k_pady, sizeof(__pyx_k_pady), 0, 0, 1, 1}, - {&__pyx_n_s_parent, __pyx_k_parent, sizeof(__pyx_k_parent), 0, 0, 1, 1}, - {&__pyx_n_s_parents, __pyx_k_parents, sizeof(__pyx_k_parents), 0, 0, 1, 1}, - {&__pyx_n_s_path, __pyx_k_path, sizeof(__pyx_k_path), 0, 0, 1, 1}, - {&__pyx_n_s_pathlib, __pyx_k_pathlib, sizeof(__pyx_k_pathlib), 0, 0, 1, 1}, - {&__pyx_n_s_paths, __pyx_k_paths, sizeof(__pyx_k_paths), 0, 0, 1, 1}, - {&__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils), 0, 0, 1, 1}, - {&__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_2, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils_2), 0, 0, 1, 1}, - {&__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_3, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils_3), 0, 0, 1, 1}, - {&__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_k_pdf_toolbox_lib_dia_yolov5_utils_4, sizeof(__pyx_k_pdf_toolbox_lib_dia_yolov5_utils_4), 0, 0, 1, 0}, - {&__pyx_n_u_png, __pyx_k_png, sizeof(__pyx_k_png), 0, 1, 0, 1}, - 
{&__pyx_n_s_prepare, __pyx_k_prepare, sizeof(__pyx_k_prepare), 0, 0, 1, 1}, - {&__pyx_n_s_print, __pyx_k_print, sizeof(__pyx_k_print), 0, 0, 1, 1}, - {&__pyx_n_s_qualname, __pyx_k_qualname, sizeof(__pyx_k_qualname), 0, 0, 1, 1}, - {&__pyx_n_s_r, __pyx_k_r, sizeof(__pyx_k_r), 0, 0, 1, 1}, - {&__pyx_n_s_random, __pyx_k_random, sizeof(__pyx_k_random), 0, 0, 1, 1}, - {&__pyx_n_s_ravel, __pyx_k_ravel, sizeof(__pyx_k_ravel), 0, 0, 1, 1}, - {&__pyx_n_s_read, __pyx_k_read, sizeof(__pyx_k_read), 0, 0, 1, 1}, - {&__pyx_n_s_recursive, __pyx_k_recursive, sizeof(__pyx_k_recursive), 0, 0, 1, 1}, - {&__pyx_n_s_relative_to, __pyx_k_relative_to, sizeof(__pyx_k_relative_to), 0, 0, 1, 1}, - {&__pyx_n_s_release, __pyx_k_release, sizeof(__pyx_k_release), 0, 0, 1, 1}, - {&__pyx_n_s_repeat, __pyx_k_repeat, sizeof(__pyx_k_repeat), 0, 0, 1, 1}, - {&__pyx_n_s_reshape, __pyx_k_reshape, sizeof(__pyx_k_reshape), 0, 0, 1, 1}, - {&__pyx_n_s_resize, __pyx_k_resize, sizeof(__pyx_k_resize), 0, 0, 1, 1}, - {&__pyx_n_s_resolve, __pyx_k_resolve, sizeof(__pyx_k_resolve), 0, 0, 1, 1}, - {&__pyx_n_s_ret_val, __pyx_k_ret_val, sizeof(__pyx_k_ret_val), 0, 0, 1, 1}, - {&__pyx_n_s_rglob, __pyx_k_rglob, sizeof(__pyx_k_rglob), 0, 0, 1, 1}, - {&__pyx_n_s_rmtree, __pyx_k_rmtree, sizeof(__pyx_k_rmtree), 0, 0, 1, 1}, - {&__pyx_n_s_rotation, __pyx_k_rotation, sizeof(__pyx_k_rotation), 0, 0, 1, 1}, - {&__pyx_n_s_rsplit, __pyx_k_rsplit, sizeof(__pyx_k_rsplit), 0, 0, 1, 1}, - {&__pyx_n_s_s, __pyx_k_s, sizeof(__pyx_k_s), 0, 0, 1, 1}, - {&__pyx_n_s_sa, __pyx_k_sa, sizeof(__pyx_k_sa), 0, 0, 1, 1}, - {&__pyx_n_s_sb, __pyx_k_sb, sizeof(__pyx_k_sb), 0, 0, 1, 1}, - {&__pyx_n_s_seed, __pyx_k_seed, sizeof(__pyx_k_seed), 0, 0, 1, 1}, - {&__pyx_n_s_segments, __pyx_k_segments, sizeof(__pyx_k_segments), 0, 0, 1, 1}, - {&__pyx_n_s_segments4, __pyx_k_segments4, sizeof(__pyx_k_segments4), 0, 0, 1, 1}, - {&__pyx_n_s_segments9, __pyx_k_segments9, sizeof(__pyx_k_segments9), 0, 0, 1, 1}, - {&__pyx_n_s_self, __pyx_k_self, 
sizeof(__pyx_k_self), 0, 0, 1, 1}, - {&__pyx_n_s_send, __pyx_k_send, sizeof(__pyx_k_send), 0, 0, 1, 1}, - {&__pyx_n_s_sep, __pyx_k_sep, sizeof(__pyx_k_sep), 0, 0, 1, 1}, - {&__pyx_n_s_set_name, __pyx_k_set_name, sizeof(__pyx_k_set_name), 0, 0, 1, 1}, - {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, - {&__pyx_n_s_shuffle, __pyx_k_shuffle, sizeof(__pyx_k_shuffle), 0, 0, 1, 1}, - {&__pyx_n_s_shutil, __pyx_k_shutil, sizeof(__pyx_k_shutil), 0, 0, 1, 1}, - {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 1}, - {&__pyx_n_s_spec, __pyx_k_spec, sizeof(__pyx_k_spec), 0, 0, 1, 1}, - {&__pyx_n_s_split, __pyx_k_split, sizeof(__pyx_k_split), 0, 0, 1, 1}, - {&__pyx_n_s_splitlines, __pyx_k_splitlines, sizeof(__pyx_k_splitlines), 0, 0, 1, 1}, - {&__pyx_n_s_stem, __pyx_k_stem, sizeof(__pyx_k_stem), 0, 0, 1, 1}, - {&__pyx_n_s_stride, __pyx_k_stride, sizeof(__pyx_k_stride), 0, 0, 1, 1}, - {&__pyx_n_s_strip, __pyx_k_strip, sizeof(__pyx_k_strip), 0, 0, 1, 1}, - {&__pyx_n_s_suffix, __pyx_k_suffix, sizeof(__pyx_k_suffix), 0, 0, 1, 1}, - {&__pyx_n_s_sum, __pyx_k_sum, sizeof(__pyx_k_sum), 0, 0, 1, 1}, - {&__pyx_n_s_super, __pyx_k_super, sizeof(__pyx_k_super), 0, 0, 1, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_n_s_threading, __pyx_k_threading, sizeof(__pyx_k_threading), 0, 0, 1, 1}, - {&__pyx_n_s_throw, __pyx_k_throw, sizeof(__pyx_k_throw), 0, 0, 1, 1}, - {&__pyx_n_u_tif, __pyx_k_tif, sizeof(__pyx_k_tif), 0, 1, 0, 1}, - {&__pyx_n_u_tiff, __pyx_k_tiff, sizeof(__pyx_k_tiff), 0, 1, 0, 1}, - {&__pyx_n_s_time, __pyx_k_time, sizeof(__pyx_k_time), 0, 0, 1, 1}, - {&__pyx_n_s_tobytes, __pyx_k_tobytes, sizeof(__pyx_k_tobytes), 0, 0, 1, 1}, - {&__pyx_n_s_torch, __pyx_k_torch, sizeof(__pyx_k_torch), 0, 0, 1, 1}, - {&__pyx_n_s_torch_nn_functional, __pyx_k_torch_nn_functional, sizeof(__pyx_k_torch_nn_functional), 0, 0, 1, 1}, - {&__pyx_n_s_total, __pyx_k_total, sizeof(__pyx_k_total), 0, 0, 1, 1}, - {&__pyx_n_s_tqdm, 
__pyx_k_tqdm, sizeof(__pyx_k_tqdm), 0, 0, 1, 1}, - {&__pyx_n_s_transpose, __pyx_k_transpose, sizeof(__pyx_k_transpose), 0, 0, 1, 1}, - {&__pyx_kp_u_txt, __pyx_k_txt, sizeof(__pyx_k_txt), 0, 1, 0, 0}, - {&__pyx_n_s_txt_2, __pyx_k_txt_2, sizeof(__pyx_k_txt_2), 0, 0, 1, 1}, - {&__pyx_n_s_uint8, __pyx_k_uint8, sizeof(__pyx_k_uint8), 0, 0, 1, 1}, - {&__pyx_n_s_uniform, __pyx_k_uniform, sizeof(__pyx_k_uniform), 0, 0, 1, 1}, - {&__pyx_n_s_unlink, __pyx_k_unlink, sizeof(__pyx_k_unlink), 0, 0, 1, 1}, - {&__pyx_n_s_update, __pyx_k_update, sizeof(__pyx_k_update), 0, 0, 1, 1}, - {&__pyx_kp_u_using_txt_labeled_images_only, __pyx_k_using_txt_labeled_images_only, sizeof(__pyx_k_using_txt_labeled_images_only), 0, 1, 0, 0}, - {&__pyx_n_u_video, __pyx_k_video, sizeof(__pyx_k_video), 0, 1, 0, 1}, - {&__pyx_kp_u_video_2, __pyx_k_video_2, sizeof(__pyx_k_video_2), 0, 1, 0, 0}, - {&__pyx_n_s_video_flag, __pyx_k_video_flag, sizeof(__pyx_k_video_flag), 0, 0, 1, 1}, - {&__pyx_kp_u_videos, __pyx_k_videos, sizeof(__pyx_k_videos), 0, 1, 0, 0}, - {&__pyx_n_s_videos_2, __pyx_k_videos_2, sizeof(__pyx_k_videos_2), 0, 0, 1, 1}, - {&__pyx_n_s_w, __pyx_k_w, sizeof(__pyx_k_w), 0, 0, 1, 1}, - {&__pyx_n_s_w0, __pyx_k_w0, sizeof(__pyx_k_w0), 0, 0, 1, 1}, - {&__pyx_n_u_webp, __pyx_k_webp, sizeof(__pyx_k_webp), 0, 1, 0, 1}, - {&__pyx_n_s_weights, __pyx_k_weights, sizeof(__pyx_k_weights), 0, 0, 1, 1}, - {&__pyx_n_u_wmv, __pyx_k_wmv, sizeof(__pyx_k_wmv), 0, 1, 0, 1}, - {&__pyx_n_s_wp, __pyx_k_wp, sizeof(__pyx_k_wp), 0, 0, 1, 1}, - {&__pyx_n_s_write, __pyx_k_write, sizeof(__pyx_k_write), 0, 0, 1, 1}, - {&__pyx_n_s_x, __pyx_k_x, sizeof(__pyx_k_x), 0, 0, 1, 1}, - {&__pyx_n_s_x1, __pyx_k_x1, sizeof(__pyx_k_x1), 0, 0, 1, 1}, - {&__pyx_n_s_x1a, __pyx_k_x1a, sizeof(__pyx_k_x1a), 0, 0, 1, 1}, - {&__pyx_n_s_x1b, __pyx_k_x1b, sizeof(__pyx_k_x1b), 0, 0, 1, 1}, - {&__pyx_n_s_x2, __pyx_k_x2, sizeof(__pyx_k_x2), 0, 0, 1, 1}, - {&__pyx_n_s_x2a, __pyx_k_x2a, sizeof(__pyx_k_x2a), 0, 0, 1, 1}, - {&__pyx_n_s_x2b, __pyx_k_x2b, 
sizeof(__pyx_k_x2b), 0, 0, 1, 1}, - {&__pyx_n_s_xc, __pyx_k_xc, sizeof(__pyx_k_xc), 0, 0, 1, 1}, - {&__pyx_n_s_xyn2xy, __pyx_k_xyn2xy, sizeof(__pyx_k_xyn2xy), 0, 0, 1, 1}, - {&__pyx_n_s_xywh2xyxy, __pyx_k_xywh2xyxy, sizeof(__pyx_k_xywh2xyxy), 0, 0, 1, 1}, - {&__pyx_n_s_xywhn2xyxy, __pyx_k_xywhn2xyxy, sizeof(__pyx_k_xywhn2xyxy), 0, 0, 1, 1}, - {&__pyx_n_s_y1, __pyx_k_y1, sizeof(__pyx_k_y1), 0, 0, 1, 1}, - {&__pyx_n_s_y1a, __pyx_k_y1a, sizeof(__pyx_k_y1a), 0, 0, 1, 1}, - {&__pyx_n_s_y1b, __pyx_k_y1b, sizeof(__pyx_k_y1b), 0, 0, 1, 1}, - {&__pyx_n_s_y2, __pyx_k_y2, sizeof(__pyx_k_y2), 0, 0, 1, 1}, - {&__pyx_n_s_y2a, __pyx_k_y2a, sizeof(__pyx_k_y2a), 0, 0, 1, 1}, - {&__pyx_n_s_y2b, __pyx_k_y2b, sizeof(__pyx_k_y2b), 0, 0, 1, 1}, - {&__pyx_n_s_yaml, __pyx_k_yaml, sizeof(__pyx_k_yaml), 0, 0, 1, 1}, - {&__pyx_n_s_yc, __pyx_k_yc, sizeof(__pyx_k_yc), 0, 0, 1, 1}, - {&__pyx_n_s_zip, __pyx_k_zip, sizeof(__pyx_k_zip), 0, 0, 1, 1}, - {&__pyx_n_s_zipfile, __pyx_k_zipfile, sizeof(__pyx_k_zipfile), 0, 0, 1, 1}, - #endif - {0, 0, 0, 0, 0, 0, 0} -}; -/* #### Code section: cached_builtins ### */ -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_sum = __Pyx_GetBuiltinName(__pyx_n_s_sum); if (!__pyx_builtin_sum) __PYX_ERR(0, 44, __pyx_L1_error) - __pyx_builtin_any = __Pyx_GetBuiltinName(__pyx_n_s_any); if (!__pyx_builtin_any) __PYX_ERR(0, 114, __pyx_L1_error) - __pyx_builtin_AssertionError = __Pyx_GetBuiltinName(__pyx_n_s_AssertionError); if (!__pyx_builtin_AssertionError) __PYX_ERR(0, 118, __pyx_L1_error) - __pyx_builtin_StopIteration = __Pyx_GetBuiltinName(__pyx_n_s_StopIteration); if (!__pyx_builtin_StopIteration) __PYX_ERR(0, 127, __pyx_L1_error) - __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(0, 207, __pyx_L1_error) - __pyx_builtin_open = __Pyx_GetBuiltinName(__pyx_n_s_open); if (!__pyx_builtin_open) __PYX_ERR(0, 343, __pyx_L1_error) - __pyx_builtin_print = 
__Pyx_GetBuiltinName(__pyx_n_s_print); if (!__pyx_builtin_print) __PYX_ERR(0, 379, __pyx_L1_error) - __pyx_builtin_zip = __Pyx_GetBuiltinName(__pyx_n_s_zip); if (!__pyx_builtin_zip) __PYX_ERR(0, 380, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: cached_constants ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":74 - * """ - * exif = image.getexif() - * orientation = exif.get(0x0112, 1) # default 1 # <<<<<<<<<<<<<< - * if orientation > 1: - * method = {2: Image.FLIP_LEFT_RIGHT, - */ - __pyx_tuple__2 = PyTuple_Pack(2, __pyx_int_274, __pyx_int_1); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(0, 74, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__2); - __Pyx_GIVEREF(__pyx_tuple__2); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":158 - * - * # Convert - * img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB # <<<<<<<<<<<<<< - * img = np.ascontiguousarray(img) - * - */ - __pyx_tuple__11 = PyTuple_Pack(3, __pyx_int_2, __pyx_int_0, __pyx_int_1); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(0, 158, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__11); - __Pyx_GIVEREF(__pyx_tuple__11); - __pyx_slice__12 = PySlice_New(Py_None, Py_None, __pyx_int_neg_1); if (unlikely(!__pyx_slice__12)) __PYX_ERR(0, 158, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__12); - __Pyx_GIVEREF(__pyx_slice__12); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":175 - * # Define label paths as a function of image paths - * sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings - * return [sb.join(x.rsplit(sa, 1)).rsplit('.', 1)[0] + '.txt' for x in img_paths] # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__13 = PyTuple_Pack(2, __pyx_kp_u__5, __pyx_int_1); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(0, 175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__13); - 
__Pyx_GIVEREF(__pyx_tuple__13); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":190 - * im = cv2.imread(path) # BGR - * assert im is not None, f'Image Not Found {path}' - * h0, w0 = im.shape[:2] # orig hw # <<<<<<<<<<<<<< - * r = self.img_size / max(h0, w0) # ratio - * if r != 1: # if sizes are not equal - */ - __pyx_slice__14 = PySlice_New(Py_None, __pyx_int_2, Py_None); if (unlikely(!__pyx_slice__14)) __PYX_ERR(0, 190, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__14); - __Pyx_GIVEREF(__pyx_slice__14); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":233 - * labels, segments = self.labels[index].copy(), self.segments[index].copy() - * if labels.size: - * labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format # <<<<<<<<<<<<<< - * segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - * labels4.append(labels) - */ - __pyx_slice__15 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__15)) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__15); - __Pyx_GIVEREF(__pyx_slice__15); - __pyx_slice__16 = PySlice_New(__pyx_int_1, Py_None, Py_None); if (unlikely(!__pyx_slice__16)) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - __pyx_tuple__17 = PyTuple_Pack(2, __pyx_slice__15, __pyx_slice__16); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__17); - __Pyx_GIVEREF(__pyx_tuple__17); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":337 - * if im_file.suffix[1:] in IMG_FORMATS: - * # image - * im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB # <<<<<<<<<<<<<< - * h, w = im.shape[:2] - * - */ - __pyx_tuple__19 = PyTuple_Pack(2, Py_Ellipsis, __pyx_slice__12); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(0, 337, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__19); - __Pyx_GIVEREF(__pyx_tuple__19); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":343 - * lb_file = 
Path(img2label_paths([str(im_file)])[0]) - * if Path(lb_file).exists(): - * with open(lb_file) as f: # <<<<<<<<<<<<<< - * lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - * - */ - __pyx_tuple__20 = PyTuple_Pack(3, Py_None, Py_None, Py_None); if (unlikely(!__pyx_tuple__20)) __PYX_ERR(0, 343, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__20); - __Pyx_GIVEREF(__pyx_tuple__20); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":354 - * b = x[1:] * [w, h, w, h] # box - * # b[2:] = b[2:].max() # rectangle to square - * b[2:] = b[2:] * 1.2 + 3 # pad # <<<<<<<<<<<<<< - * b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int) - * - */ - __pyx_slice__22 = PySlice_New(__pyx_int_2, Py_None, Py_None); if (unlikely(!__pyx_slice__22)) __PYX_ERR(0, 354, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__22); - __Pyx_GIVEREF(__pyx_slice__22); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":355 - * # b[2:] = b[2:].max() # rectangle to square - * b[2:] = b[2:] * 1.2 + 3 # pad - * b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int) # <<<<<<<<<<<<<< - * - * b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image - */ - __pyx_tuple__23 = PyTuple_Pack(2, __pyx_int_neg_1, __pyx_int_4); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(0, 355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__23); - __Pyx_GIVEREF(__pyx_tuple__23); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":362 - * - * - * def autosplit(path='../datasets/coco128/images', weights=(0.9, 0.1, 0.0), annotated_only=False): # <<<<<<<<<<<<<< - * """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - * Usage: from utils.datasets import *; autosplit() - */ - __pyx_tuple__24 = PyTuple_Pack(3, __pyx_float_0_9, __pyx_float_0_1, __pyx_float_0_0); if (unlikely(!__pyx_tuple__24)) __PYX_ERR(0, 362, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__24); - __Pyx_GIVEREF(__pyx_tuple__24); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":23 - * import 
numpy as np - * import torch - * import torch.nn.functional as F # <<<<<<<<<<<<<< - * import yaml - * from PIL import ExifTags, Image, ImageOps - */ - __pyx_tuple__27 = PyTuple_Pack(3, __pyx_n_s_torch, __pyx_n_s_nn, __pyx_n_s_functional); if (unlikely(!__pyx_tuple__27)) __PYX_ERR(0, 23, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__27); - __Pyx_GIVEREF(__pyx_tuple__27); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":42 - * - * - * def get_hash(paths): # <<<<<<<<<<<<<< - * # Returns a single hash value of a list of paths (files or dirs) - * size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes - */ - __pyx_tuple__28 = PyTuple_Pack(5, __pyx_n_s_paths, __pyx_n_s_size, __pyx_n_s_h, __pyx_n_s_genexpr, __pyx_n_s_genexpr); if (unlikely(!__pyx_tuple__28)) __PYX_ERR(0, 42, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__28); - __Pyx_GIVEREF(__pyx_tuple__28); - __pyx_codeobj__29 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__28, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_get_hash, 42, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__29)) __PYX_ERR(0, 42, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":50 - * - * - * def exif_size(img): # <<<<<<<<<<<<<< - * # Returns exif-corrected PIL size - * s = img.size # (width, height) - */ - __pyx_tuple__30 = PyTuple_Pack(3, __pyx_n_s_img, __pyx_n_s_s, __pyx_n_s_rotation); if (unlikely(!__pyx_tuple__30)) __PYX_ERR(0, 50, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__30); - __Pyx_GIVEREF(__pyx_tuple__30); - __pyx_codeobj__31 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 3, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__30, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_exif_size, 50, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__31)) __PYX_ERR(0, 50, 
__pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":65 - * - * - * def exif_transpose(image): # <<<<<<<<<<<<<< - * """ - * Transpose a PIL image accordingly if it has an EXIF Orientation tag. - */ - __pyx_tuple__32 = PyTuple_Pack(4, __pyx_n_s_image, __pyx_n_s_exif, __pyx_n_s_orientation, __pyx_n_s_method); if (unlikely(!__pyx_tuple__32)) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__32); - __Pyx_GIVEREF(__pyx_tuple__32); - __pyx_codeobj__33 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__32, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_exif_transpose, 65, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__33)) __PYX_ERR(0, 65, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":92 - * class LoadImages: - * # YOLOv5 image/video dataloader, i.e. `python detect.py --source image.jpg/vid.mp4` - * def __init__(self, path, img_size=640, stride=32, auto=True): # <<<<<<<<<<<<<< - * p = str(Path(path).resolve()) # os-agnostic absolute path - * if '*' in p: - */ - __pyx_tuple__34 = PyTuple_Pack(13, __pyx_n_s_self, __pyx_n_s_path, __pyx_n_s_img_size, __pyx_n_s_stride, __pyx_n_s_auto, __pyx_n_s_p, __pyx_n_s_files, __pyx_n_s_images, __pyx_n_s_videos_2, __pyx_n_s_ni, __pyx_n_s_nv, __pyx_n_s_x, __pyx_n_s_x); if (unlikely(!__pyx_tuple__34)) __PYX_ERR(0, 92, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__34); - __Pyx_GIVEREF(__pyx_tuple__34); - __pyx_codeobj__35 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 13, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__34, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_init, 92, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__35)) __PYX_ERR(0, 92, __pyx_L1_error) - __pyx_tuple__36 = PyTuple_Pack(3, ((PyObject *)__pyx_int_640), ((PyObject *)__pyx_int_32), 
((PyObject *)Py_True)); if (unlikely(!__pyx_tuple__36)) __PYX_ERR(0, 92, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__36); - __Pyx_GIVEREF(__pyx_tuple__36); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":121 - * f'Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}' - * - * def __iter__(self): # <<<<<<<<<<<<<< - * self.count = 0 - * return self - */ - __pyx_tuple__37 = PyTuple_Pack(1, __pyx_n_s_self); if (unlikely(!__pyx_tuple__37)) __PYX_ERR(0, 121, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__37); - __Pyx_GIVEREF(__pyx_tuple__37); - __pyx_codeobj__38 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 1, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__37, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_iter, 121, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__38)) __PYX_ERR(0, 121, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":125 - * return self - * - * def __next__(self): # <<<<<<<<<<<<<< - * if self.count == self.nf: - * raise StopIteration - */ - __pyx_tuple__39 = PyTuple_Pack(6, __pyx_n_s_self, __pyx_n_s_path, __pyx_n_s_ret_val, __pyx_n_s_img0, __pyx_n_s_s, __pyx_n_s_img); if (unlikely(!__pyx_tuple__39)) __PYX_ERR(0, 125, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__39); - __Pyx_GIVEREF(__pyx_tuple__39); - __pyx_codeobj__40 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__39, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_next, 125, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__40)) __PYX_ERR(0, 125, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":163 - * return path, img, img0, self.cap, s - * - * def new_video(self, path): # <<<<<<<<<<<<<< - * self.frame = 0 - * self.cap = cv2.VideoCapture(path) - */ - __pyx_tuple__41 = PyTuple_Pack(2, __pyx_n_s_self, 
__pyx_n_s_path); if (unlikely(!__pyx_tuple__41)) __PYX_ERR(0, 163, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__41); - __Pyx_GIVEREF(__pyx_tuple__41); - __pyx_codeobj__42 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__41, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_new_video, 163, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__42)) __PYX_ERR(0, 163, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":168 - * self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self.nf # number of files - * - */ - __pyx_codeobj__43 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 1, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__37, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_len, 168, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__43)) __PYX_ERR(0, 168, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":172 - * - * - * def img2label_paths(img_paths): # <<<<<<<<<<<<<< - * # Define label paths as a function of image paths - * sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings - */ - __pyx_tuple__44 = PyTuple_Pack(4, __pyx_n_s_img_paths, __pyx_n_s_sa, __pyx_n_s_sb, __pyx_n_s_x); if (unlikely(!__pyx_tuple__44)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__44); - __Pyx_GIVEREF(__pyx_tuple__44); - __pyx_codeobj__45 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__44, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_img2label_paths, 172, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__45)) __PYX_ERR(0, 172, __pyx_L1_error) - - /* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":179 - * - * # Ancillary functions -------------------------------------------------------------------------------------------------- - * def load_image(self, i): # <<<<<<<<<<<<<< - * # loads 1 image from dataset index 'i', returns im, original hw, resized hw - * im = self.imgs[i] - */ - __pyx_tuple__46 = PyTuple_Pack(8, __pyx_n_s_self, __pyx_n_s_i, __pyx_n_s_im, __pyx_n_s_npy, __pyx_n_s_path, __pyx_n_s_h0, __pyx_n_s_w0, __pyx_n_s_r); if (unlikely(!__pyx_tuple__46)) __PYX_ERR(0, 179, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__46); - __Pyx_GIVEREF(__pyx_tuple__46); - __pyx_codeobj__47 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 8, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__46, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_load_image, 179, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__47)) __PYX_ERR(0, 179, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":200 - * - * - * def load_mosaic(self, index): # <<<<<<<<<<<<<< - * # YOLOv5 4-mosaic loader. 
Loads 1 image + 3 random images into a 4-image mosaic - * labels4, segments4 = [], [] - */ - __pyx_tuple__48 = PyTuple_Pack(30, __pyx_n_s_self, __pyx_n_s_index, __pyx_n_s_labels4, __pyx_n_s_segments4, __pyx_n_s_s, __pyx_n_s_yc, __pyx_n_s_xc, __pyx_n_s_indices, __pyx_n_s_i, __pyx_n_s_img, __pyx_n_s__21, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_img4, __pyx_n_s_x1a, __pyx_n_s_y1a, __pyx_n_s_x2a, __pyx_n_s_y2a, __pyx_n_s_x1b, __pyx_n_s_y1b, __pyx_n_s_x2b, __pyx_n_s_y2b, __pyx_n_s_padw, __pyx_n_s_padh, __pyx_n_s_labels, __pyx_n_s_segments, __pyx_n_s_x, __pyx_n_s_genexpr, __pyx_n_s_genexpr, __pyx_n_s_x); if (unlikely(!__pyx_tuple__48)) __PYX_ERR(0, 200, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__48); - __Pyx_GIVEREF(__pyx_tuple__48); - __pyx_codeobj__49 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 30, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__48, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_load_mosaic, 200, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__49)) __PYX_ERR(0, 200, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":247 - * - * - * def load_mosaic9(self, index): # <<<<<<<<<<<<<< - * # YOLOv5 9-mosaic loader. 
Loads 1 image + 8 random images into a 9-image mosaic - * labels9, segments9 = [], [] - */ - __pyx_tuple__50 = PyTuple_Pack(33, __pyx_n_s_self, __pyx_n_s_index, __pyx_n_s_labels9, __pyx_n_s_segments9, __pyx_n_s_s, __pyx_n_s_indices, __pyx_n_s_i, __pyx_n_s_img, __pyx_n_s__21, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_img9, __pyx_n_s_h0, __pyx_n_s_w0, __pyx_n_s_c, __pyx_n_s_padx, __pyx_n_s_pady, __pyx_n_s_x1, __pyx_n_s_y1, __pyx_n_s_x2, __pyx_n_s_y2, __pyx_n_s_labels, __pyx_n_s_segments, __pyx_n_s_hp, __pyx_n_s_wp, __pyx_n_s_yc, __pyx_n_s_xc, __pyx_n_s_x, __pyx_n_s_genexpr, __pyx_n_s_genexpr, __pyx_n_s_x, __pyx_n_s_genexpr, __pyx_n_s_x); if (unlikely(!__pyx_tuple__50)) __PYX_ERR(0, 247, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__50); - __Pyx_GIVEREF(__pyx_tuple__50); - __pyx_codeobj__51 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 33, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__50, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_load_mosaic9, 247, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__51)) __PYX_ERR(0, 247, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":313 - * - * - * def create_folder(path='./new'): # <<<<<<<<<<<<<< - * # Create folder - * if os.path.exists(path): - */ - __pyx_tuple__52 = PyTuple_Pack(1, __pyx_n_s_path); if (unlikely(!__pyx_tuple__52)) __PYX_ERR(0, 313, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__52); - __Pyx_GIVEREF(__pyx_tuple__52); - __pyx_codeobj__53 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 1, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__52, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_create_folder, 313, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__53)) __PYX_ERR(0, 313, __pyx_L1_error) - __pyx_tuple__54 = PyTuple_Pack(1, ((PyObject*)__pyx_kp_u_new)); if (unlikely(!__pyx_tuple__54)) __PYX_ERR(0, 313, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__54); - __Pyx_GIVEREF(__pyx_tuple__54); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":320 - * - * - * def flatten_recursive(path='../datasets/coco128'): # <<<<<<<<<<<<<< - * # Flatten a recursive directory by bringing all files to top level - * new_path = Path(path + '_flat') - */ - __pyx_tuple__55 = PyTuple_Pack(3, __pyx_n_s_path, __pyx_n_s_new_path, __pyx_n_s_file); if (unlikely(!__pyx_tuple__55)) __PYX_ERR(0, 320, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__55); - __Pyx_GIVEREF(__pyx_tuple__55); - __pyx_codeobj__56 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 3, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__55, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_flatten_recursive, 320, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__56)) __PYX_ERR(0, 320, __pyx_L1_error) - __pyx_tuple__57 = PyTuple_Pack(1, ((PyObject*)__pyx_kp_u_datasets_coco128)); if (unlikely(!__pyx_tuple__57)) __PYX_ERR(0, 320, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__57); - __Pyx_GIVEREF(__pyx_tuple__57); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":328 - * - * - * def extract_boxes(path='../datasets/coco128'): # from utils.datasets import *; extract_boxes() # <<<<<<<<<<<<<< - * # Convert detection dataset into classification dataset, with one directory per class - * path = Path(path) # images dir - */ - __pyx_tuple__58 = PyTuple_Pack(15, __pyx_n_s_path, __pyx_n_s_files, __pyx_n_s_n, __pyx_n_s_im_file, __pyx_n_s_im, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_lb_file, __pyx_n_s_f, __pyx_n_s_lb, __pyx_n_s_j, __pyx_n_s_x, __pyx_n_s_c, __pyx_n_s_b, __pyx_n_s_x); if (unlikely(!__pyx_tuple__58)) __PYX_ERR(0, 328, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__58); - __Pyx_GIVEREF(__pyx_tuple__58); - __pyx_codeobj__59 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 15, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, 
__pyx_tuple__58, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_extract_boxes, 328, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__59)) __PYX_ERR(0, 328, __pyx_L1_error) - __pyx_tuple__60 = PyTuple_Pack(1, ((PyObject*)__pyx_kp_u_datasets_coco128)); if (unlikely(!__pyx_tuple__60)) __PYX_ERR(0, 328, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__60); - __Pyx_GIVEREF(__pyx_tuple__60); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":362 - * - * - * def autosplit(path='../datasets/coco128/images', weights=(0.9, 0.1, 0.0), annotated_only=False): # <<<<<<<<<<<<<< - * """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - * Usage: from utils.datasets import *; autosplit() - */ - __pyx_tuple__61 = PyTuple_Pack(13, __pyx_n_s_path, __pyx_n_s_weights, __pyx_n_s_annotated_only, __pyx_n_s_files, __pyx_n_s_n, __pyx_n_s_indices, __pyx_n_s_txt_2, __pyx_n_s_i, __pyx_n_s_img, __pyx_n_s_f, __pyx_n_s_genexpr, __pyx_n_s_genexpr, __pyx_n_s_x); if (unlikely(!__pyx_tuple__61)) __PYX_ERR(0, 362, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__61); - __Pyx_GIVEREF(__pyx_tuple__61); - __pyx_codeobj__62 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 13, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__61, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4, __pyx_n_s_autosplit, 362, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__62)) __PYX_ERR(0, 362, __pyx_L1_error) - __pyx_tuple__63 = PyTuple_Pack(3, ((PyObject*)__pyx_kp_u_datasets_coco128_images), ((PyObject*)__pyx_tuple__24), ((PyObject *)Py_False)); if (unlikely(!__pyx_tuple__63)) __PYX_ERR(0, 362, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__63); - __Pyx_GIVEREF(__pyx_tuple__63); - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} -/* #### Code section: init_constants ### */ - -static CYTHON_SMALL_CODE int 
__Pyx_InitConstants(void) { - __pyx_umethod_PyDict_Type_get.type = (PyObject*)&PyDict_Type; - __pyx_umethod_PyDict_Type_get.method_name = &__pyx_n_s_get; - #if CYTHON_USE_MODULE_STATE - if (__Pyx_InitString(__pyx_string_tab[0], &__pyx_kp_u_) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[1], &__pyx_n_s_AssertionError) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[2], &__pyx_kp_u_Autosplitting_images_from) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[3], &__pyx_n_s_CAP_PROP_FRAME_COUNT) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[4], &__pyx_kp_u_ERROR) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[5], &__pyx_n_s_ExifTags) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[6], &__pyx_n_s_F) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[7], &__pyx_n_s_FLIP_LEFT_RIGHT) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[8], &__pyx_n_s_FLIP_TOP_BOTTOM) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[9], &__pyx_n_s_HELP_URL) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[10], &__pyx_n_s_IMG_FORMATS) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[11], &__pyx_n_s_INTER_AREA) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[12], &__pyx_n_s_INTER_LINEAR) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[13], &__pyx_n_s_Image) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[14], &__pyx_n_s_ImageOps) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[15], &__pyx_kp_u_Image_Not_Found) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[16], &__pyx_n_s_LoadImages) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if 
(__Pyx_InitString(__pyx_string_tab[17], &__pyx_n_s_LoadImages___init) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[18], &__pyx_n_s_LoadImages___iter) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[19], &__pyx_n_s_LoadImages___len) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[20], &__pyx_n_s_LoadImages___next) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[21], &__pyx_n_s_LoadImages_new_video) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[22], &__pyx_kp_u_No_images_or_videos_found_in) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[23], &__pyx_n_u_Orientation) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[24], &__pyx_n_s_PIL) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[25], &__pyx_n_s_Path) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[26], &__pyx_n_s_Pool) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[27], &__pyx_n_s_ROTATE_180) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[28], &__pyx_n_s_ROTATE_270) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[29], &__pyx_n_s_ROTATE_90) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[30], &__pyx_n_s_StopIteration) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[31], &__pyx_kp_u_Supported_formats_are_images) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[32], &__pyx_n_s_TAGS) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[33], &__pyx_n_s_TRANSPOSE) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[34], &__pyx_n_s_TRANSVERSE) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[35], 
&__pyx_n_s_Thread) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[36], &__pyx_n_s_ThreadPool) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[37], &__pyx_n_s_VID_FORMATS) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[38], &__pyx_n_s_VideoCapture) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[39], &__pyx_n_s_ZipFile) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[40], &__pyx_kp_u__10) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[41], &__pyx_kp_u__18) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[42], &__pyx_n_s__21) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[43], &__pyx_n_u__21) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[44], &__pyx_kp_u__25) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[45], &__pyx_kp_u__26) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[46], &__pyx_n_s__3) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[47], &__pyx_kp_u__3) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[48], &__pyx_kp_u__4) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[49], &__pyx_kp_u__5) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[50], &__pyx_kp_u__6) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[51], &__pyx_n_s__64) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[52], &__pyx_kp_u__7) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[53], &__pyx_kp_u__8) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[54], &__pyx_kp_u__9) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if 
(__Pyx_InitString(__pyx_string_tab[55], &__pyx_n_u_a) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[56], &__pyx_n_s_annotated_only) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[57], &__pyx_n_s_any) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[58], &__pyx_n_s_append) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[59], &__pyx_n_s_args) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[60], &__pyx_n_s_array) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[61], &__pyx_n_s_as_posix) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[62], &__pyx_n_s_ascontiguousarray) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[63], &__pyx_n_u_asf) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[64], &__pyx_n_s_astype) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[65], &__pyx_n_s_asyncio_coroutines) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[66], &__pyx_n_s_augment) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[67], &__pyx_n_s_auto) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[68], &__pyx_n_s_autosplit) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[69], &__pyx_n_s_autosplit_locals_genexpr) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[70], &__pyx_kp_u_autosplit_test_txt) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[71], &__pyx_kp_u_autosplit_train_txt) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[72], &__pyx_kp_u_autosplit_val_txt) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[73], &__pyx_n_u_avi) < 0) __PYX_ERR(0, 1, 
__pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[74], &__pyx_n_s_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[75], &__pyx_n_u_bmp) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[76], &__pyx_kp_u_box_failure_in) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[77], &__pyx_n_s_c) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[78], &__pyx_n_s_cap) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[79], &__pyx_n_s_choices) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[80], &__pyx_n_s_class_getitem) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[81], &__pyx_n_u_classifier) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[82], &__pyx_n_s_cline_in_traceback) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[83], &__pyx_n_s_clip) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[84], &__pyx_n_s_close) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[85], &__pyx_n_s_concatenate) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[86], &__pyx_n_s_copy) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[87], &__pyx_n_s_copyfile) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[88], &__pyx_n_s_count) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[89], &__pyx_n_s_create_folder) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[90], &__pyx_n_s_cv2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[91], &__pyx_kp_u_datasets_coco128) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[92], &__pyx_kp_u_datasets_coco128_images) < 0) __PYX_ERR(0, 1, 
__pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[93], &__pyx_n_s_dict) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[94], &__pyx_kp_u_disable) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[95], &__pyx_n_u_dng) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[96], &__pyx_n_s_doc) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[97], &__pyx_kp_u_does_not_exist) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[98], &__pyx_n_s_dtype) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[99], &__pyx_kp_u_enable) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[100], &__pyx_n_s_encode) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[101], &__pyx_n_s_enter) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[102], &__pyx_n_s_enumerate) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[103], &__pyx_n_s_exif) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[104], &__pyx_n_u_exif) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[105], &__pyx_n_s_exif_size) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[106], &__pyx_n_s_exif_transpose) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[107], &__pyx_n_s_exists) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[108], &__pyx_n_s_exit) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[109], &__pyx_n_s_extract_boxes) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[110], &__pyx_n_s_f) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[111], &__pyx_n_s_file) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if 
(__Pyx_InitString(__pyx_string_tab[112], &__pyx_n_s_files) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[113], &__pyx_n_u_flat) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[114], &__pyx_n_s_flatten_recursive) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[115], &__pyx_n_s_float32) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[116], &__pyx_n_s_frame) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[117], &__pyx_n_s_frames) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[118], &__pyx_n_s_full) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[119], &__pyx_n_s_functional) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[120], &__pyx_kp_u_gc) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[121], &__pyx_n_s_genexpr) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[122], &__pyx_n_s_get) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[123], &__pyx_n_s_get_hash) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[124], &__pyx_n_s_get_hash_locals_genexpr) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[125], &__pyx_n_s_getexif) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[126], &__pyx_n_s_getexif_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[127], &__pyx_n_s_getsize) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[128], &__pyx_n_u_gif) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[129], &__pyx_n_s_glob) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[130], &__pyx_n_s_h) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if 
(__Pyx_InitString(__pyx_string_tab[131], &__pyx_n_s_h0) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[132], &__pyx_n_s_hashlib) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[133], &__pyx_n_s_hexdigest) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[134], &__pyx_n_s_hp) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[135], &__pyx_kp_u_https_github_com_ultralytics_yol) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[136], &__pyx_n_s_i) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[137], &__pyx_n_s_im) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[138], &__pyx_n_s_im_file) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[139], &__pyx_n_s_image) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[140], &__pyx_n_u_image) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[141], &__pyx_kp_u_image_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[142], &__pyx_n_s_images) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[143], &__pyx_n_u_images) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[144], &__pyx_n_s_img) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[145], &__pyx_n_s_img0) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[146], &__pyx_n_s_img2label_paths) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[147], &__pyx_n_s_img4) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[148], &__pyx_n_s_img9) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[149], &__pyx_n_s_img_files) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if 
(__Pyx_InitString(__pyx_string_tab[150], &__pyx_n_s_img_hw) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[151], &__pyx_n_s_img_hw0) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[152], &__pyx_n_s_img_npy) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[153], &__pyx_n_s_img_paths) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[154], &__pyx_n_s_img_size) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[155], &__pyx_n_s_imgs) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[156], &__pyx_n_s_import) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[157], &__pyx_n_s_imread) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[158], &__pyx_n_s_imwrite) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[159], &__pyx_n_s_index) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[160], &__pyx_n_s_indices) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[161], &__pyx_n_s_info) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[162], &__pyx_n_s_init) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[163], &__pyx_n_s_init_subclass) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[164], &__pyx_n_s_initializing) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[165], &__pyx_n_s_int) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[166], &__pyx_n_s_interpolation) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[167], &__pyx_n_s_is_coroutine) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[168], &__pyx_n_s_is_dir) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if 
(__Pyx_InitString(__pyx_string_tab[169], &__pyx_n_s_isdir) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[170], &__pyx_kp_u_isenabled) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[171], &__pyx_n_s_isfile) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[172], &__pyx_n_s_items) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[173], &__pyx_n_s_iter) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[174], &__pyx_n_s_itertools) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[175], &__pyx_n_s_j) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[176], &__pyx_n_s_join) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[177], &__pyx_n_u_jpeg) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[178], &__pyx_kp_u_jpg) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[179], &__pyx_n_u_jpg_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[180], &__pyx_n_s_json) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[181], &__pyx_n_s_k) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[182], &__pyx_n_s_keys) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[183], &__pyx_n_s_labels) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[184], &__pyx_n_u_labels) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[185], &__pyx_n_s_labels4) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[186], &__pyx_n_s_labels9) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[187], &__pyx_n_s_lb) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[188], &__pyx_n_s_lb_file) < 
0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[189], &__pyx_n_s_len) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[190], &__pyx_n_s_letterbox) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[191], &__pyx_n_s_load) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[192], &__pyx_n_s_load_image) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[193], &__pyx_n_s_load_mosaic) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[194], &__pyx_n_s_load_mosaic9) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[195], &__pyx_n_s_load_mosaic9_locals_genexpr) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[196], &__pyx_n_s_load_mosaic_locals_genexpr) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[197], &__pyx_n_s_lower) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[198], &__pyx_n_u_m4v) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[199], &__pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[200], &__pyx_n_s_makedirs) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[201], &__pyx_n_s_math) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[202], &__pyx_n_s_md5) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[203], &__pyx_n_s_metaclass) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[204], &__pyx_n_s_method) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[205], &__pyx_n_s_missing_ok) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[206], &__pyx_n_s_mkdir) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[207], 
&__pyx_n_u_mkv) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[208], &__pyx_n_s_mode) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[209], &__pyx_n_s_module) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[210], &__pyx_n_s_mosaic_border) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[211], &__pyx_n_u_mov) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[212], &__pyx_n_u_mp4) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[213], &__pyx_n_u_mpeg) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[214], &__pyx_n_u_mpg) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[215], &__pyx_n_u_mpo) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[216], &__pyx_n_s_multiprocessing_pool) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[217], &__pyx_n_s_n) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[218], &__pyx_n_s_name) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[219], &__pyx_n_s_name_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[220], &__pyx_kp_u_new) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[221], &__pyx_n_s_new_path) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[222], &__pyx_n_s_new_video) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[223], &__pyx_n_s_next) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[224], &__pyx_n_s_nf) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[225], &__pyx_n_s_ni) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[226], &__pyx_n_s_nn) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - 
if (__Pyx_InitString(__pyx_string_tab[227], &__pyx_n_s_np) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[228], &__pyx_n_s_npy) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[229], &__pyx_n_s_numpy) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[230], &__pyx_n_s_nv) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[231], &__pyx_n_s_open) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[232], &__pyx_n_s_orientation) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[233], &__pyx_n_s_os) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[234], &__pyx_n_s_out) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[235], &__pyx_n_s_p) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[236], &__pyx_n_s_padh) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[237], &__pyx_n_s_padw) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[238], &__pyx_n_s_padx) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[239], &__pyx_n_s_pady) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[240], &__pyx_n_s_parent) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[241], &__pyx_n_s_parents) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[242], &__pyx_n_s_path) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[243], &__pyx_n_s_pathlib) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[244], &__pyx_n_s_paths) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[245], &__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[246], 
&__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[247], &__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[248], &__pyx_kp_s_pdf_toolbox_lib_dia_yolov5_utils_4) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[249], &__pyx_n_u_png) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[250], &__pyx_n_s_prepare) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[251], &__pyx_n_s_print) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[252], &__pyx_n_s_qualname) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[253], &__pyx_n_s_r) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[254], &__pyx_n_s_random) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[255], &__pyx_n_s_ravel) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[256], &__pyx_n_s_read) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[257], &__pyx_n_s_recursive) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[258], &__pyx_n_s_relative_to) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[259], &__pyx_n_s_release) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[260], &__pyx_n_s_repeat) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[261], &__pyx_n_s_reshape) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[262], &__pyx_n_s_resize) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[263], &__pyx_n_s_resolve) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[264], &__pyx_n_s_ret_val) < 0) __PYX_ERR(0, 1, __pyx_L1_error); 
- if (__Pyx_InitString(__pyx_string_tab[265], &__pyx_n_s_rglob) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[266], &__pyx_n_s_rmtree) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[267], &__pyx_n_s_rotation) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[268], &__pyx_n_s_rsplit) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[269], &__pyx_n_s_s) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[270], &__pyx_n_s_sa) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[271], &__pyx_n_s_sb) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[272], &__pyx_n_s_seed) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[273], &__pyx_n_s_segments) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[274], &__pyx_n_s_segments4) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[275], &__pyx_n_s_segments9) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[276], &__pyx_n_s_self) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[277], &__pyx_n_s_send) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[278], &__pyx_n_s_sep) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[279], &__pyx_n_s_set_name) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[280], &__pyx_n_s_shape) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[281], &__pyx_n_s_shuffle) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[282], &__pyx_n_s_shutil) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[283], &__pyx_n_s_size) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[284], 
&__pyx_n_s_spec) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[285], &__pyx_n_s_split) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[286], &__pyx_n_s_splitlines) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[287], &__pyx_n_s_stem) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[288], &__pyx_n_s_stride) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[289], &__pyx_n_s_strip) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[290], &__pyx_n_s_suffix) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[291], &__pyx_n_s_sum) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[292], &__pyx_n_s_super) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[293], &__pyx_n_s_test) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[294], &__pyx_n_s_threading) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[295], &__pyx_n_s_throw) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[296], &__pyx_n_u_tif) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[297], &__pyx_n_u_tiff) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[298], &__pyx_n_s_time) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[299], &__pyx_n_s_tobytes) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[300], &__pyx_n_s_torch) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[301], &__pyx_n_s_torch_nn_functional) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[302], &__pyx_n_s_total) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[303], &__pyx_n_s_tqdm) < 0) __PYX_ERR(0, 1, 
__pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[304], &__pyx_n_s_transpose) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[305], &__pyx_kp_u_txt) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[306], &__pyx_n_s_txt_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[307], &__pyx_n_s_uint8) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[308], &__pyx_n_s_uniform) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[309], &__pyx_n_s_unlink) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[310], &__pyx_n_s_update) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[311], &__pyx_kp_u_using_txt_labeled_images_only) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[312], &__pyx_n_u_video) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[313], &__pyx_kp_u_video_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[314], &__pyx_n_s_video_flag) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[315], &__pyx_kp_u_videos) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[316], &__pyx_n_s_videos_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[317], &__pyx_n_s_w) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[318], &__pyx_n_s_w0) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[319], &__pyx_n_u_webp) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[320], &__pyx_n_s_weights) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[321], &__pyx_n_u_wmv) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[322], &__pyx_n_s_wp) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if 
(__Pyx_InitString(__pyx_string_tab[323], &__pyx_n_s_write) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[324], &__pyx_n_s_x) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[325], &__pyx_n_s_x1) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[326], &__pyx_n_s_x1a) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[327], &__pyx_n_s_x1b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[328], &__pyx_n_s_x2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[329], &__pyx_n_s_x2a) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[330], &__pyx_n_s_x2b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[331], &__pyx_n_s_xc) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[332], &__pyx_n_s_xyn2xy) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[333], &__pyx_n_s_xywh2xyxy) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[334], &__pyx_n_s_xywhn2xyxy) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[335], &__pyx_n_s_y1) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[336], &__pyx_n_s_y1a) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[337], &__pyx_n_s_y1b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[338], &__pyx_n_s_y2) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[339], &__pyx_n_s_y2a) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[340], &__pyx_n_s_y2b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[341], &__pyx_n_s_yaml) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[342], &__pyx_n_s_yc) < 0) __PYX_ERR(0, 1, 
__pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[343], &__pyx_n_s_zip) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - if (__Pyx_InitString(__pyx_string_tab[344], &__pyx_n_s_zipfile) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - #endif - #if !CYTHON_USE_MODULE_STATE - if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - #endif - __pyx_float_0_0 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_float_0_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_0_1 = PyFloat_FromDouble(0.1); if (unlikely(!__pyx_float_0_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_0_9 = PyFloat_FromDouble(0.9); if (unlikely(!__pyx_float_0_9)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_1_2 = PyFloat_FromDouble(1.2); if (unlikely(!__pyx_float_1_2)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_2 = PyInt_FromLong(2); if (unlikely(!__pyx_int_2)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_3 = PyInt_FromLong(3); if (unlikely(!__pyx_int_3)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_4 = PyInt_FromLong(4); if (unlikely(!__pyx_int_4)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_5 = PyInt_FromLong(5); if (unlikely(!__pyx_int_5)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_6 = PyInt_FromLong(6); if (unlikely(!__pyx_int_6)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_7 = PyInt_FromLong(7); if (unlikely(!__pyx_int_7)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_8 = PyInt_FromLong(8); if (unlikely(!__pyx_int_8)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_32 = PyInt_FromLong(32); if (unlikely(!__pyx_int_32)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_114 = PyInt_FromLong(114); if (unlikely(!__pyx_int_114)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_274 = PyInt_FromLong(274); if (unlikely(!__pyx_int_274)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_640 = PyInt_FromLong(640); if 
(unlikely(!__pyx_int_640)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: init_globals ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - return 0; -} -/* #### Code section: init_module ### */ - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - #if CYTHON_USE_TYPE_SPECS - 
__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash)) __PYX_ERR(0, 42, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash) < 0) __PYX_ERR(0, 42, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash) < 0) __PYX_ERR(0, 42, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct__get_hash->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, 
&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr)) __PYX_ERR(0, 44, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr) < 0) __PYX_ERR(0, 44, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr) < 0) __PYX_ERR(0, 44, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_1_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic_spec, NULL); if 
(unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic)) __PYX_ERR(0, 200, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic) < 0) __PYX_ERR(0, 200, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic) < 0) __PYX_ERR(0, 200, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_2_load_mosaic->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr)) __PYX_ERR(0, 204, __pyx_L1_error) - if 
(__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr) < 0) __PYX_ERR(0, 204, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr) < 0) __PYX_ERR(0, 204, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_3_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9)) __PYX_ERR(0, 247, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9_spec, 
__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9) < 0) __PYX_ERR(0, 247, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9 = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9) < 0) __PYX_ERR(0, 247, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_4_load_mosaic9->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr)) __PYX_ERR(0, 280, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr) < 0) __PYX_ERR(0, 280, __pyx_L1_error) - #else - 
__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr) < 0) __PYX_ERR(0, 280, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_5_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr)) __PYX_ERR(0, 295, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr) < 0) __PYX_ERR(0, 295, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr; - #endif - #if 
!CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr) < 0) __PYX_ERR(0, 295, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_6_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit)) __PYX_ERR(0, 362, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit) < 0) __PYX_ERR(0, 362, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit) < 0) 
__PYX_ERR(0, 362, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_7_autosplit->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr)) __PYX_ERR(0, 371, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr_spec, __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr) < 0) __PYX_ERR(0, 371, __pyx_L1_error) - #else - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr = &__pyx_type_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr) < 0) __PYX_ERR(0, 371, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr->tp_print = 0; - #endif - #if 
!CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr->tp_dictoffset && __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets___pyx_scope_struct_8_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_datasets(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_datasets}, - {0, NULL} -}; -#endif - -#ifdef __cplusplus -namespace { - struct PyModuleDef __pyx_moduledef = - #else - static struct PyModuleDef __pyx_moduledef = - #endif - { - PyModuleDef_HEAD_INIT, - "datasets", - __pyx_k_Dataloaders_and_dataset_utils, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ 
- #elif CYTHON_USE_MODULE_STATE - sizeof(__pyx_mstate), /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - #if CYTHON_USE_MODULE_STATE - __pyx_m_traverse, /* m_traverse */ - __pyx_m_clear, /* m_clear */ - NULL /* m_free */ - #else - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ - #endif - }; - #ifdef __cplusplus -} /* anonymous namespace */ -#endif -#endif - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initdatasets(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initdatasets(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_datasets(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_datasets(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? 
-1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *module, const char* from_name, const char* to_name, int allow_none) -#else -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) -#endif -{ - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { -#if CYTHON_COMPILING_IN_LIMITED_API - result = PyModule_AddObject(module, to_name, value); -#else - result = PyDict_SetItemString(moddict, to_name, value); -#endif - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - CYTHON_UNUSED_VAR(def); - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; -#if CYTHON_COMPILING_IN_LIMITED_API - moddict = module; -#else - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; -#endif - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", 
"__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_datasets(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - int stringtab_initialized = 0; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t __pyx_t_4; - Py_ssize_t __pyx_t_5; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - PyObject *__pyx_t_9 = NULL; - int __pyx_t_10; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'datasets' has already been imported. 
Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("datasets", __pyx_methods, __pyx_k_Dataloaders_and_dataset_utils, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #elif CYTHON_COMPILING_IN_LIMITED_API - __pyx_t_1 = PyModule_Create(&__pyx_moduledef); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - { - int add_module_result = PyState_AddModule(__pyx_t_1, &__pyx_moduledef); - Py_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely((add_module_result < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - } - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #endif - CYTHON_UNUSED_VAR(__pyx_t_1); - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_datasets(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef 
__Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 && defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - PyEval_InitThreads(); - #endif - /*--- Initialize various global constants etc. 
---*/ - if (__Pyx_InitConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - stringtab_initialized = 1; - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_pdf_toolbox__lib__dia_yolov5__utils__datasets) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "pdf_toolbox.lib.dia_yolov5.utils.datasets")) { - if (unlikely((PyDict_SetItemString(modules, "pdf_toolbox.lib.dia_yolov5.utils.datasets", __pyx_m) < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - if (unlikely((__Pyx_modinit_type_init_code() < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":6 - * """ - * - * import glob # <<<<<<<<<<<<<< - * import hashlib - * import json - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_glob, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if 
(PyDict_SetItem(__pyx_d, __pyx_n_s_glob, __pyx_t_2) < 0) __PYX_ERR(0, 6, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":7 - * - * import glob - * import hashlib # <<<<<<<<<<<<<< - * import json - * import math - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_hashlib, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_hashlib, __pyx_t_2) < 0) __PYX_ERR(0, 7, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":8 - * import glob - * import hashlib - * import json # <<<<<<<<<<<<<< - * import math - * import os - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_json, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_json, __pyx_t_2) < 0) __PYX_ERR(0, 8, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":9 - * import hashlib - * import json - * import math # <<<<<<<<<<<<<< - * import os - * import random - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_math, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 9, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_math, __pyx_t_2) < 0) __PYX_ERR(0, 9, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":10 - * import json - * import math - * import os # <<<<<<<<<<<<<< - * import random - * import shutil - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_os, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 10, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_os, __pyx_t_2) < 0) __PYX_ERR(0, 10, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":11 - * import math - * import os - * import 
random # <<<<<<<<<<<<<< - * import shutil - * import time - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_random, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 11, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_random, __pyx_t_2) < 0) __PYX_ERR(0, 11, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":12 - * import os - * import random - * import shutil # <<<<<<<<<<<<<< - * import time - * from itertools import repeat - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_shutil, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 12, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_shutil, __pyx_t_2) < 0) __PYX_ERR(0, 12, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":13 - * import random - * import shutil - * import time # <<<<<<<<<<<<<< - * from itertools import repeat - * from multiprocessing.pool import Pool, ThreadPool - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_time, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_time, __pyx_t_2) < 0) __PYX_ERR(0, 13, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":14 - * import shutil - * import time - * from itertools import repeat # <<<<<<<<<<<<<< - * from multiprocessing.pool import Pool, ThreadPool - * from pathlib import Path - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_repeat); - __Pyx_GIVEREF(__pyx_n_s_repeat); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_repeat); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_itertools, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = 
__Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_repeat); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_repeat, __pyx_t_2) < 0) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":15 - * import time - * from itertools import repeat - * from multiprocessing.pool import Pool, ThreadPool # <<<<<<<<<<<<<< - * from pathlib import Path - * from threading import Thread - */ - __pyx_t_3 = PyList_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_n_s_Pool); - __Pyx_GIVEREF(__pyx_n_s_Pool); - PyList_SET_ITEM(__pyx_t_3, 0, __pyx_n_s_Pool); - __Pyx_INCREF(__pyx_n_s_ThreadPool); - __Pyx_GIVEREF(__pyx_n_s_ThreadPool); - PyList_SET_ITEM(__pyx_t_3, 1, __pyx_n_s_ThreadPool); - __pyx_t_2 = __Pyx_Import(__pyx_n_s_multiprocessing_pool, __pyx_t_3, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_Pool); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Pool, __pyx_t_3) < 0) __PYX_ERR(0, 15, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_ThreadPool); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_ThreadPool, __pyx_t_3) < 0) __PYX_ERR(0, 15, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":16 - * from itertools import repeat - * from multiprocessing.pool import Pool, ThreadPool - * from pathlib import Path # <<<<<<<<<<<<<< - * from threading import Thread - * from zipfile import ZipFile - */ 
- __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 16, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_Path); - __Pyx_GIVEREF(__pyx_n_s_Path); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_Path); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_pathlib, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 16, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_Path); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 16, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Path, __pyx_t_2) < 0) __PYX_ERR(0, 16, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":17 - * from multiprocessing.pool import Pool, ThreadPool - * from pathlib import Path - * from threading import Thread # <<<<<<<<<<<<<< - * from zipfile import ZipFile - * - */ - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_n_s_Thread); - __Pyx_GIVEREF(__pyx_n_s_Thread); - PyList_SET_ITEM(__pyx_t_3, 0, __pyx_n_s_Thread); - __pyx_t_2 = __Pyx_Import(__pyx_n_s_threading, __pyx_t_3, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_Thread); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Thread, __pyx_t_3) < 0) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":18 - * from pathlib import Path - * from threading import Thread - * from zipfile import ZipFile # <<<<<<<<<<<<<< - * - * import cv2 - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 18, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_ZipFile); - __Pyx_GIVEREF(__pyx_n_s_ZipFile); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_ZipFile); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_zipfile, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 18, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_ZipFile); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 18, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_ZipFile, __pyx_t_2) < 0) __PYX_ERR(0, 18, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":20 - * from zipfile import ZipFile - * - * import cv2 # <<<<<<<<<<<<<< - * import numpy as np - * import torch - */ - __pyx_t_3 = __Pyx_ImportDottedModule(__pyx_n_s_cv2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 20, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_cv2, __pyx_t_3) < 0) __PYX_ERR(0, 20, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":21 - * - * import cv2 - * import numpy as np # <<<<<<<<<<<<<< - * import torch - * import torch.nn.functional as F - */ - __pyx_t_3 = __Pyx_ImportDottedModule(__pyx_n_s_numpy, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 21, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_np, __pyx_t_3) < 0) __PYX_ERR(0, 21, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":22 - * import cv2 - * import numpy as np - * import torch # <<<<<<<<<<<<<< - * import torch.nn.functional as F - * import yaml - */ - __pyx_t_3 = __Pyx_ImportDottedModule(__pyx_n_s_torch, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 22, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_torch, __pyx_t_3) < 0) 
__PYX_ERR(0, 22, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":23 - * import numpy as np - * import torch - * import torch.nn.functional as F # <<<<<<<<<<<<<< - * import yaml - * from PIL import ExifTags, Image, ImageOps - */ - __pyx_t_3 = __Pyx_ImportDottedModule(__pyx_n_s_torch_nn_functional, __pyx_tuple__27); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 23, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_F, __pyx_t_3) < 0) __PYX_ERR(0, 23, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":24 - * import torch - * import torch.nn.functional as F - * import yaml # <<<<<<<<<<<<<< - * from PIL import ExifTags, Image, ImageOps - * from tqdm import tqdm - */ - __pyx_t_3 = __Pyx_ImportDottedModule(__pyx_n_s_yaml, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_yaml, __pyx_t_3) < 0) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":25 - * import torch.nn.functional as F - * import yaml - * from PIL import ExifTags, Image, ImageOps # <<<<<<<<<<<<<< - * from tqdm import tqdm - * - */ - __pyx_t_3 = PyList_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 25, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_n_s_ExifTags); - __Pyx_GIVEREF(__pyx_n_s_ExifTags); - PyList_SET_ITEM(__pyx_t_3, 0, __pyx_n_s_ExifTags); - __Pyx_INCREF(__pyx_n_s_Image); - __Pyx_GIVEREF(__pyx_n_s_Image); - PyList_SET_ITEM(__pyx_t_3, 1, __pyx_n_s_Image); - __Pyx_INCREF(__pyx_n_s_ImageOps); - __Pyx_GIVEREF(__pyx_n_s_ImageOps); - PyList_SET_ITEM(__pyx_t_3, 2, __pyx_n_s_ImageOps); - __pyx_t_2 = __Pyx_Import(__pyx_n_s_PIL, __pyx_t_3, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 25, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = 
__Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_ExifTags); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 25, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_ExifTags, __pyx_t_3) < 0) __PYX_ERR(0, 25, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_Image); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 25, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Image, __pyx_t_3) < 0) __PYX_ERR(0, 25, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_ImageOps); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 25, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_ImageOps, __pyx_t_3) < 0) __PYX_ERR(0, 25, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":26 - * import yaml - * from PIL import ExifTags, Image, ImageOps - * from tqdm import tqdm # <<<<<<<<<<<<<< - * - * from pdf_toolbox.lib.dia_yolov5.utils.augmentations import letterbox - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 26, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_tqdm); - __Pyx_GIVEREF(__pyx_n_s_tqdm); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_tqdm); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_tqdm, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 26, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_tqdm); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 26, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_tqdm, __pyx_t_2) < 0) __PYX_ERR(0, 26, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":28 - * from tqdm import tqdm - * - * from 
pdf_toolbox.lib.dia_yolov5.utils.augmentations import letterbox # <<<<<<<<<<<<<< - * from pdf_toolbox.lib.dia_yolov5.utils.general import (xyn2xy, xywh2xyxy, xywhn2xyxy) - * - */ - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 28, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_n_s_letterbox); - __Pyx_GIVEREF(__pyx_n_s_letterbox); - PyList_SET_ITEM(__pyx_t_3, 0, __pyx_n_s_letterbox); - __pyx_t_2 = __Pyx_Import(__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_2, __pyx_t_3, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 28, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_letterbox); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 28, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_letterbox, __pyx_t_3) < 0) __PYX_ERR(0, 28, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":29 - * - * from pdf_toolbox.lib.dia_yolov5.utils.augmentations import letterbox - * from pdf_toolbox.lib.dia_yolov5.utils.general import (xyn2xy, xywh2xyxy, xywhn2xyxy) # <<<<<<<<<<<<<< - * - * # Parameters - */ - __pyx_t_2 = PyList_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 29, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_xyn2xy); - __Pyx_GIVEREF(__pyx_n_s_xyn2xy); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_xyn2xy); - __Pyx_INCREF(__pyx_n_s_xywh2xyxy); - __Pyx_GIVEREF(__pyx_n_s_xywh2xyxy); - PyList_SET_ITEM(__pyx_t_2, 1, __pyx_n_s_xywh2xyxy); - __Pyx_INCREF(__pyx_n_s_xywhn2xyxy); - __Pyx_GIVEREF(__pyx_n_s_xywhn2xyxy); - PyList_SET_ITEM(__pyx_t_2, 2, __pyx_n_s_xywhn2xyxy); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils_3, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 29, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, 
__pyx_n_s_xyn2xy); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 29, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_xyn2xy, __pyx_t_2) < 0) __PYX_ERR(0, 29, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_xywh2xyxy); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 29, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_xywh2xyxy, __pyx_t_2) < 0) __PYX_ERR(0, 29, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_xywhn2xyxy); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 29, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_xywhn2xyxy, __pyx_t_2) < 0) __PYX_ERR(0, 29, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":32 - * - * # Parameters - * HELP_URL = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data' # <<<<<<<<<<<<<< - * IMG_FORMATS = ['bmp', 'dng', 'jpeg', 'jpg', 'mpo', 'png', 'tif', 'tiff', 'webp'] # include image suffixes - * VID_FORMATS = ['asf', 'avi', 'gif', 'm4v', 'mkv', 'mov', 'mp4', 'mpeg', 'mpg', 'wmv'] # include video suffixes - */ - if (PyDict_SetItem(__pyx_d, __pyx_n_s_HELP_URL, __pyx_kp_u_https_github_com_ultralytics_yol) < 0) __PYX_ERR(0, 32, __pyx_L1_error) - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":33 - * # Parameters - * HELP_URL = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data' - * IMG_FORMATS = ['bmp', 'dng', 'jpeg', 'jpg', 'mpo', 'png', 'tif', 'tiff', 'webp'] # include image suffixes # <<<<<<<<<<<<<< - * VID_FORMATS = ['asf', 'avi', 'gif', 'm4v', 'mkv', 'mov', 'mp4', 'mpeg', 'mpg', 'wmv'] # include video suffixes - * - */ - __pyx_t_3 = PyList_New(9); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 33, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_n_u_bmp); - __Pyx_GIVEREF(__pyx_n_u_bmp); - 
PyList_SET_ITEM(__pyx_t_3, 0, __pyx_n_u_bmp); - __Pyx_INCREF(__pyx_n_u_dng); - __Pyx_GIVEREF(__pyx_n_u_dng); - PyList_SET_ITEM(__pyx_t_3, 1, __pyx_n_u_dng); - __Pyx_INCREF(__pyx_n_u_jpeg); - __Pyx_GIVEREF(__pyx_n_u_jpeg); - PyList_SET_ITEM(__pyx_t_3, 2, __pyx_n_u_jpeg); - __Pyx_INCREF(__pyx_n_u_jpg_2); - __Pyx_GIVEREF(__pyx_n_u_jpg_2); - PyList_SET_ITEM(__pyx_t_3, 3, __pyx_n_u_jpg_2); - __Pyx_INCREF(__pyx_n_u_mpo); - __Pyx_GIVEREF(__pyx_n_u_mpo); - PyList_SET_ITEM(__pyx_t_3, 4, __pyx_n_u_mpo); - __Pyx_INCREF(__pyx_n_u_png); - __Pyx_GIVEREF(__pyx_n_u_png); - PyList_SET_ITEM(__pyx_t_3, 5, __pyx_n_u_png); - __Pyx_INCREF(__pyx_n_u_tif); - __Pyx_GIVEREF(__pyx_n_u_tif); - PyList_SET_ITEM(__pyx_t_3, 6, __pyx_n_u_tif); - __Pyx_INCREF(__pyx_n_u_tiff); - __Pyx_GIVEREF(__pyx_n_u_tiff); - PyList_SET_ITEM(__pyx_t_3, 7, __pyx_n_u_tiff); - __Pyx_INCREF(__pyx_n_u_webp); - __Pyx_GIVEREF(__pyx_n_u_webp); - PyList_SET_ITEM(__pyx_t_3, 8, __pyx_n_u_webp); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_IMG_FORMATS, __pyx_t_3) < 0) __PYX_ERR(0, 33, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":34 - * HELP_URL = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data' - * IMG_FORMATS = ['bmp', 'dng', 'jpeg', 'jpg', 'mpo', 'png', 'tif', 'tiff', 'webp'] # include image suffixes - * VID_FORMATS = ['asf', 'avi', 'gif', 'm4v', 'mkv', 'mov', 'mp4', 'mpeg', 'mpg', 'wmv'] # include video suffixes # <<<<<<<<<<<<<< - * - * # Get orientation exif tag - */ - __pyx_t_3 = PyList_New(10); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 34, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_n_u_asf); - __Pyx_GIVEREF(__pyx_n_u_asf); - PyList_SET_ITEM(__pyx_t_3, 0, __pyx_n_u_asf); - __Pyx_INCREF(__pyx_n_u_avi); - __Pyx_GIVEREF(__pyx_n_u_avi); - PyList_SET_ITEM(__pyx_t_3, 1, __pyx_n_u_avi); - __Pyx_INCREF(__pyx_n_u_gif); - __Pyx_GIVEREF(__pyx_n_u_gif); - PyList_SET_ITEM(__pyx_t_3, 2, __pyx_n_u_gif); - __Pyx_INCREF(__pyx_n_u_m4v); - 
__Pyx_GIVEREF(__pyx_n_u_m4v); - PyList_SET_ITEM(__pyx_t_3, 3, __pyx_n_u_m4v); - __Pyx_INCREF(__pyx_n_u_mkv); - __Pyx_GIVEREF(__pyx_n_u_mkv); - PyList_SET_ITEM(__pyx_t_3, 4, __pyx_n_u_mkv); - __Pyx_INCREF(__pyx_n_u_mov); - __Pyx_GIVEREF(__pyx_n_u_mov); - PyList_SET_ITEM(__pyx_t_3, 5, __pyx_n_u_mov); - __Pyx_INCREF(__pyx_n_u_mp4); - __Pyx_GIVEREF(__pyx_n_u_mp4); - PyList_SET_ITEM(__pyx_t_3, 6, __pyx_n_u_mp4); - __Pyx_INCREF(__pyx_n_u_mpeg); - __Pyx_GIVEREF(__pyx_n_u_mpeg); - PyList_SET_ITEM(__pyx_t_3, 7, __pyx_n_u_mpeg); - __Pyx_INCREF(__pyx_n_u_mpg); - __Pyx_GIVEREF(__pyx_n_u_mpg); - PyList_SET_ITEM(__pyx_t_3, 8, __pyx_n_u_mpg); - __Pyx_INCREF(__pyx_n_u_wmv); - __Pyx_GIVEREF(__pyx_n_u_wmv); - PyList_SET_ITEM(__pyx_t_3, 9, __pyx_n_u_wmv); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_VID_FORMATS, __pyx_t_3) < 0) __PYX_ERR(0, 34, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":37 - * - * # Get orientation exif tag - * for orientation in ExifTags.TAGS.keys(): # <<<<<<<<<<<<<< - * if ExifTags.TAGS[orientation] == 'Orientation': - * break - */ - __pyx_t_4 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_ExifTags); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_TAGS); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(__pyx_t_7 == Py_None)) { - PyErr_Format(PyExc_AttributeError, "'NoneType' object has no attribute '%.30s'", "keys"); - __PYX_ERR(0, 37, __pyx_L1_error) - } - __pyx_t_2 = __Pyx_dict_iterator(__pyx_t_7, 0, __pyx_n_s_keys, (&__pyx_t_5), (&__pyx_t_6)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_3); - __pyx_t_3 = __pyx_t_2; - __pyx_t_2 = 0; - while (1) { - __pyx_t_8 = __Pyx_dict_iter_next(__pyx_t_3, 
__pyx_t_5, &__pyx_t_4, &__pyx_t_2, NULL, NULL, __pyx_t_6); - if (unlikely(__pyx_t_8 == 0)) break; - if (unlikely(__pyx_t_8 == -1)) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_orientation, __pyx_t_2) < 0) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":38 - * # Get orientation exif tag - * for orientation in ExifTags.TAGS.keys(): - * if ExifTags.TAGS[orientation] == 'Orientation': # <<<<<<<<<<<<<< - * break - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_ExifTags); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_TAGS); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_orientation); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_9 = __Pyx_PyObject_GetItem(__pyx_t_7, __pyx_t_2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_10 = (__Pyx_PyUnicode_Equals(__pyx_t_9, __pyx_n_u_Orientation, Py_EQ)); if (unlikely((__pyx_t_10 < 0))) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (__pyx_t_10) { - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":39 - * for orientation in ExifTags.TAGS.keys(): - * if ExifTags.TAGS[orientation] == 'Orientation': - * break # <<<<<<<<<<<<<< - * - * - */ - goto __pyx_L3_break; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":38 - * # Get orientation exif tag - * for orientation in ExifTags.TAGS.keys(): - * if ExifTags.TAGS[orientation] == 'Orientation': # <<<<<<<<<<<<<< - * break - * - */ - } - } - __pyx_L3_break:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* 
"pdf_toolbox/lib/dia_yolov5/utils/datasets.py":42 - * - * - * def get_hash(paths): # <<<<<<<<<<<<<< - * # Returns a single hash value of a list of paths (files or dirs) - * size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_1get_hash, 0, __pyx_n_s_get_hash, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__29)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 42, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_get_hash, __pyx_t_3) < 0) __PYX_ERR(0, 42, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":50 - * - * - * def exif_size(img): # <<<<<<<<<<<<<< - * # Returns exif-corrected PIL size - * s = img.size # (width, height) - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_3exif_size, 0, __pyx_n_s_exif_size, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__31)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 50, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_exif_size, __pyx_t_3) < 0) __PYX_ERR(0, 50, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":65 - * - * - * def exif_transpose(image): # <<<<<<<<<<<<<< - * """ - * Transpose a PIL image accordingly if it has an EXIF Orientation tag. 
- */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_5exif_transpose, 0, __pyx_n_s_exif_transpose, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__33)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_exif_transpose, __pyx_t_3) < 0) __PYX_ERR(0, 65, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":90 - * return image - * - * class LoadImages: # <<<<<<<<<<<<<< - * # YOLOv5 image/video dataloader, i.e. `python detect.py --source image.jpg/vid.mp4` - * def __init__(self, path, img_size=640, stride=32, auto=True): - */ - __pyx_t_3 = __Pyx_Py3MetaclassPrepare((PyObject *) NULL, __pyx_empty_tuple, __pyx_n_s_LoadImages, __pyx_n_s_LoadImages, (PyObject *) NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, (PyObject *) NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 90, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":92 - * class LoadImages: - * # YOLOv5 image/video dataloader, i.e. 
`python detect.py --source image.jpg/vid.mp4` - * def __init__(self, path, img_size=640, stride=32, auto=True): # <<<<<<<<<<<<<< - * p = str(Path(path).resolve()) # os-agnostic absolute path - * if '*' in p: - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_1__init__, 0, __pyx_n_s_LoadImages___init, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__35)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 92, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_9, __pyx_tuple__36); - if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_init, __pyx_t_9) < 0) __PYX_ERR(0, 92, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":121 - * f'Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}' - * - * def __iter__(self): # <<<<<<<<<<<<<< - * self.count = 0 - * return self - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_3__iter__, 0, __pyx_n_s_LoadImages___iter, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__38)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 121, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_iter, __pyx_t_9) < 0) __PYX_ERR(0, 121, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":125 - * return self - * - * def __next__(self): # <<<<<<<<<<<<<< - * if self.count == self.nf: - * raise StopIteration - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_5__next__, 0, __pyx_n_s_LoadImages___next, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__40)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 125, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_3, 
__pyx_n_s_next, __pyx_t_9) < 0) __PYX_ERR(0, 125, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":163 - * return path, img, img0, self.cap, s - * - * def new_video(self, path): # <<<<<<<<<<<<<< - * self.frame = 0 - * self.cap = cv2.VideoCapture(path) - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_7new_video, 0, __pyx_n_s_LoadImages_new_video, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__42)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 163, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_new_video, __pyx_t_9) < 0) __PYX_ERR(0, 163, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":168 - * self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self.nf # number of files - * - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_10LoadImages_9__len__, 0, __pyx_n_s_LoadImages___len, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__43)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 168, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_len, __pyx_t_9) < 0) __PYX_ERR(0, 168, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":90 - * return image - * - * class LoadImages: # <<<<<<<<<<<<<< - * # YOLOv5 image/video dataloader, i.e. 
`python detect.py --source image.jpg/vid.mp4` - * def __init__(self, path, img_size=640, stride=32, auto=True): - */ - __pyx_t_9 = __Pyx_Py3ClassCreate(((PyObject*)&PyType_Type), __pyx_n_s_LoadImages, __pyx_empty_tuple, __pyx_t_3, NULL, 0, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 90, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_LoadImages, __pyx_t_9) < 0) __PYX_ERR(0, 90, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":172 - * - * - * def img2label_paths(img_paths): # <<<<<<<<<<<<<< - * # Define label paths as a function of image paths - * sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_7img2label_paths, 0, __pyx_n_s_img2label_paths, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__45)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_img2label_paths, __pyx_t_3) < 0) __PYX_ERR(0, 172, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":179 - * - * # Ancillary functions -------------------------------------------------------------------------------------------------- - * def load_image(self, i): # <<<<<<<<<<<<<< - * # loads 1 image from dataset index 'i', returns im, original hw, resized hw - * im = self.imgs[i] - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_9load_image, 0, __pyx_n_s_load_image, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__47)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 179, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_load_image, __pyx_t_3) < 0) 
__PYX_ERR(0, 179, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":200 - * - * - * def load_mosaic(self, index): # <<<<<<<<<<<<<< - * # YOLOv5 4-mosaic loader. Loads 1 image + 3 random images into a 4-image mosaic - * labels4, segments4 = [], [] - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_11load_mosaic, 0, __pyx_n_s_load_mosaic, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__49)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 200, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_load_mosaic, __pyx_t_3) < 0) __PYX_ERR(0, 200, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":247 - * - * - * def load_mosaic9(self, index): # <<<<<<<<<<<<<< - * # YOLOv5 9-mosaic loader. Loads 1 image + 8 random images into a 9-image mosaic - * labels9, segments9 = [], [] - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_13load_mosaic9, 0, __pyx_n_s_load_mosaic9, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__51)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 247, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_load_mosaic9, __pyx_t_3) < 0) __PYX_ERR(0, 247, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":313 - * - * - * def create_folder(path='./new'): # <<<<<<<<<<<<<< - * # Create folder - * if os.path.exists(path): - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_15create_folder, 0, __pyx_n_s_create_folder, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__53)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 313, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - 
__Pyx_CyFunction_SetDefaultsTuple(__pyx_t_3, __pyx_tuple__54); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_create_folder, __pyx_t_3) < 0) __PYX_ERR(0, 313, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":320 - * - * - * def flatten_recursive(path='../datasets/coco128'): # <<<<<<<<<<<<<< - * # Flatten a recursive directory by bringing all files to top level - * new_path = Path(path + '_flat') - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_17flatten_recursive, 0, __pyx_n_s_flatten_recursive, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__56)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 320, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_3, __pyx_tuple__57); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_flatten_recursive, __pyx_t_3) < 0) __PYX_ERR(0, 320, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":328 - * - * - * def extract_boxes(path='../datasets/coco128'): # from utils.datasets import *; extract_boxes() # <<<<<<<<<<<<<< - * # Convert detection dataset into classification dataset, with one directory per class - * path = Path(path) # images dir - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_19extract_boxes, 0, __pyx_n_s_extract_boxes, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__59)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 328, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_3, __pyx_tuple__60); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_extract_boxes, __pyx_t_3) < 0) __PYX_ERR(0, 328, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":362 - * - * - * def autosplit(path='../datasets/coco128/images', weights=(0.9, 0.1, 0.0), 
annotated_only=False): # <<<<<<<<<<<<<< - * """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - * Usage: from utils.datasets import *; autosplit() - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_11pdf_toolbox_3lib_10dia_yolov5_5utils_8datasets_21autosplit, 0, __pyx_n_s_autosplit, NULL, __pyx_n_s_pdf_toolbox_lib_dia_yolov5_utils, __pyx_d, ((PyObject *)__pyx_codeobj__62)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 362, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_3, __pyx_tuple__63); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_autosplit, __pyx_t_3) < 0) __PYX_ERR(0, 362, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "pdf_toolbox/lib/dia_yolov5/utils/datasets.py":1 - * # YOLOv5 by Ultralytics, GPL-3.0 license # <<<<<<<<<<<<<< - * """ - * Dataloaders and dataset utils - */ - __pyx_t_3 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_3) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_9); - if (__pyx_m) { - if (__pyx_d && stringtab_initialized) { - __Pyx_AddTraceback("init pdf_toolbox.lib.dia_yolov5.utils.datasets", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - #if !CYTHON_USE_MODULE_STATE - Py_CLEAR(__pyx_m); - #endif - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init pdf_toolbox.lib.dia_yolov5.utils.datasets"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 
0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} -/* #### Code section: cleanup_globals ### */ -/* #### Code section: cleanup_module ### */ -/* #### Code section: main_method ### */ -/* #### Code section: utility_code_pragmas ### */ -#if _MSC_VER -#pragma warning( push ) -/* Warning 4127: conditional expression is constant - * Cython uses constant conditional expressions to allow in inline functions to be optimized at - * compile-time, so this warning is not useful - */ -#pragma warning( disable : 4127 ) -#endif - - - -/* #### Code section: utility_code_def ### */ - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1; - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) { - PyObject *exc_type = tstate->curexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; - if (unlikely(PyTuple_Check(err))) - return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); -} -#endif - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - 
Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* PyObjectGetAttrStrNoError */ -static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_TYPE_SLOTS && PY_VERSION_HEX >= 0x030700B1 - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -} - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStrNoError(__pyx_b, name); - if (unlikely(!result) && !PyErr_Occurred()) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - 
-/* TupleAndListFromArray */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE void __Pyx_copy_object_array(PyObject *const *CYTHON_RESTRICT src, PyObject** CYTHON_RESTRICT dest, Py_ssize_t length) { - PyObject *v; - Py_ssize_t i; - for (i = 0; i < length; i++) { - v = dest[i] = src[i]; - Py_INCREF(v); - } -} -static CYTHON_INLINE PyObject * -__Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - Py_INCREF(__pyx_empty_tuple); - return __pyx_empty_tuple; - } - res = PyTuple_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyTupleObject*)res)->ob_item, n); - return res; -} -static CYTHON_INLINE PyObject * -__Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - return PyList_New(0); - } - res = PyList_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyListObject*)res)->ob_item, n); - return res; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if 
(hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* fastcall */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s) -{ - Py_ssize_t i, n = PyTuple_GET_SIZE(kwnames); - for (i = 0; i < n; i++) - { - if (s == PyTuple_GET_ITEM(kwnames, i)) return kwvalues[i]; - } - for (i = 0; i < n; i++) - { - int eq = __Pyx_PyUnicode_Equals(s, PyTuple_GET_ITEM(kwnames, i), Py_EQ); - if (unlikely(eq != 0)) { - if (unlikely(eq < 0)) return NULL; // error - return kwvalues[i]; - } - } - return NULL; // not found (no exception set) -} -#endif - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values 
for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject *const *kwvalues, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - int kwds_is_tuple = CYTHON_METH_FASTCALL && likely(PyTuple_Check(kwds)); - while (1) { - if (kwds_is_tuple) { - if (pos >= PyTuple_GET_SIZE(kwds)) break; - key = PyTuple_GET_ITEM(kwds, pos); - value = kwvalues[pos]; - pos++; - } - else - { - if (!PyDict_Next(kwds, &pos, &key, &value)) break; - } - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = ( - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 
1 : - #endif - PyUnicode_Compare(**name, key) - ); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - #if PY_MAJOR_VERSION < 3 - PyErr_Format(PyExc_TypeError, - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - PyErr_Format(PyExc_TypeError, - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? 
"" : "s", num_found); -} - -/* RaiseClosureNameError */ -static CYTHON_INLINE void __Pyx_RaiseClosureNameError(const char *varname) { - PyErr_Format(PyExc_NameError, "free variable '%s' referenced before assignment in enclosing scope", varname); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#elif 
CYTHON_COMPILING_IN_LIMITED_API - if (unlikely(!__pyx_m)) { - return NULL; - } - result = PyObject_GetAttr(__pyx_m, name); - if (likely(result)) { - return result; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL && !CYTHON_VECTORCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. 
- */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? PyDict_Size(kwargs) : 0; - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - 
else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = Py_TYPE(func)->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectFastCall */ -static PyObject* __Pyx_PyObject_FastCall_fallback(PyObject *func, PyObject 
**args, size_t nargs, PyObject *kwargs) { - PyObject *argstuple; - PyObject *result; - size_t i; - argstuple = PyTuple_New((Py_ssize_t)nargs); - if (unlikely(!argstuple)) return NULL; - for (i = 0; i < nargs; i++) { - Py_INCREF(args[i]); - PyTuple_SET_ITEM(argstuple, (Py_ssize_t)i, args[i]); - } - result = __Pyx_PyObject_Call(func, argstuple, kwargs); - Py_DECREF(argstuple); - return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject **args, size_t _nargs, PyObject *kwargs) { - Py_ssize_t nargs = __Pyx_PyVectorcall_NARGS(_nargs); -#if CYTHON_COMPILING_IN_CPYTHON - if (nargs == 0 && kwargs == NULL) { -#ifdef __Pyx_CyFunction_USED - if (__Pyx_IsCyOrPyCFunction(func)) -#else - if (PyCFunction_Check(func)) -#endif - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) { - return __Pyx_PyObject_CallMethO(func, NULL); - } - } - } - else if (nargs == 1 && kwargs == NULL) { - if (PyCFunction_Check(func)) - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, args[0]); - } - } - } -#endif - #if PY_VERSION_HEX < 0x030800B1 - #if CYTHON_FAST_PYCCALL - if (PyCFunction_Check(func)) { - if (kwargs) { - return _PyCFunction_FastCallDict(func, args, nargs, kwargs); - } else { - return _PyCFunction_FastCallKeywords(func, args, nargs, NULL); - } - } - #if PY_VERSION_HEX >= 0x030700A1 - if (!kwargs && __Pyx_IS_TYPE(func, &PyMethodDescr_Type)) { - return _PyMethodDescr_FastCallKeywords(func, args, nargs, NULL); - } - #endif - #endif - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs); - } - #endif - #endif - #if CYTHON_VECTORCALL - vectorcallfunc f = _PyVectorcall_Function(func); - if (f) { - return f(func, args, (size_t)nargs, kwargs); - } - #elif defined(__Pyx_CyFunction_USED) && CYTHON_BACKPORT_VECTORCALL - if (__Pyx_CyFunction_CheckExact(func)) { - __pyx_vectorcallfunc f = __Pyx_CyFunction_func_vectorcall(func); - if 
(f) return f(func, args, (size_t)nargs, kwargs); - } - #endif - if (nargs == 0) { - return __Pyx_PyObject_Call(func, __pyx_empty_tuple, kwargs); - } - return __Pyx_PyObject_FastCall_fallback(func, args, (size_t)nargs, kwargs); -} - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type, *local_value, *local_tb; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - 
PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* pep479 */ -static void __Pyx_Generator_Replace_StopIteration(int in_async_gen) { - PyObject *exc, *val, *tb, *cur_exc; - __Pyx_PyThreadState_declare - #ifdef __Pyx_StopAsyncIteration_USED - int is_async_stopiteration = 0; - #endif - CYTHON_MAYBE_UNUSED_VAR(in_async_gen); - cur_exc = PyErr_Occurred(); - if (likely(!__Pyx_PyErr_GivenExceptionMatches(cur_exc, PyExc_StopIteration))) { - #ifdef __Pyx_StopAsyncIteration_USED - if (in_async_gen && unlikely(__Pyx_PyErr_GivenExceptionMatches(cur_exc, __Pyx_PyExc_StopAsyncIteration))) { - is_async_stopiteration = 1; - } else - #endif - return; - } - __Pyx_PyThreadState_assign - __Pyx_GetException(&exc, &val, &tb); - Py_XDECREF(exc); - Py_XDECREF(val); - Py_XDECREF(tb); - PyErr_SetString(PyExc_RuntimeError, - #ifdef __Pyx_StopAsyncIteration_USED - is_async_stopiteration ? "async generator raised StopAsyncIteration" : - in_async_gen ? 
"async generator raised StopIteration" : - #endif - "generator raised StopIteration"); -} - -/* PyObjectCallOneArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *args[2] = {NULL, arg}; - return __Pyx_PyObject_FastCall(func, args+1, 1 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* DictGetItem */ -#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY -static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key) { - PyObject *value; - value = PyDict_GetItemWithError(d, key); - if (unlikely(!value)) { - if (!PyErr_Occurred()) { - if (unlikely(PyTuple_Check(key))) { - PyObject* args = PyTuple_Pack(1, key); - if (likely(args)) { - PyErr_SetObject(PyExc_KeyError, args); - Py_DECREF(args); - } - } else { - PyErr_SetObject(PyExc_KeyError, key); - } - } - return NULL; - } - Py_INCREF(value); - return value; -} -#endif - -/* PyIntCompare */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_EqObjC(PyObject *op1, PyObject *op2, long intval, long inplace) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_UNUSED_VAR(inplace); - if (op1 == op2) { - Py_RETURN_TRUE; - } - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long a = PyInt_AS_LONG(op1); - if (a == b) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - int unequal; - unsigned long uintval; - Py_ssize_t size = Py_SIZE(op1); - const digit* digits = ((PyLongObject*)op1)->ob_digit; - if (intval == 0) { - if (size == 0) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } else if (intval < 0) { - if (size >= 0) - Py_RETURN_FALSE; - intval = -intval; - size = -size; - } else { - if (size <= 0) - Py_RETURN_FALSE; - } - uintval = (unsigned long) intval; -#if PyLong_SHIFT * 4 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 4)) { - unequal = (size != 5) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & 
(unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[4] != ((uintval >> (4 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 3 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 3)) { - unequal = (size != 4) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 2 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 2)) { - unequal = (size != 3) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 1 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 1)) { - unequal = (size != 2) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif - unequal = (size != 1) || (((unsigned long) digits[0]) != (uintval & (unsigned long) PyLong_MASK)); - if (unequal == 0) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - if ((double)a == (double)b) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } - return ( - PyObject_RichCompare(op1, op2, Py_EQ)); -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (unlikely(!j)) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; 
-} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? 
i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PyMappingMethods *mm = Py_TYPE(o)->tp_as_mapping; - PySequenceMethods *sm = Py_TYPE(o)->tp_as_sequence; - if (mm && mm->mp_subscript) { - PyObject *r, *key = PyInt_FromSsize_t(i); - if (unlikely(!key)) return NULL; - r = mm->mp_subscript(o, key); - Py_DECREF(key); - return r; - } - if (likely(sm && sm->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(sm->sq_length)) { - Py_ssize_t l = sm->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return sm->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - #endif - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - 
PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -#endif - -/* UnpackUnboundCMethod */ -static int __Pyx_TryUnpackUnboundCMethod(__Pyx_CachedCFunction* target) { - PyObject *method; - method = __Pyx_PyObject_GetAttrStr(target->type, *target->method_name); - if (unlikely(!method)) - return -1; - target->method = method; -#if CYTHON_COMPILING_IN_CPYTHON - #if PY_MAJOR_VERSION >= 3 - if (likely(__Pyx_TypeCheck(method, &PyMethodDescr_Type))) - #endif - { - PyMethodDescrObject *descr = (PyMethodDescrObject*) method; - target->func = descr->d_method->ml_meth; - target->flag = descr->d_method->ml_flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_STACKLESS); - } -#endif - return 0; -} - -/* CallUnboundCMethod1 */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_CallUnboundCMethod1(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg) { - if (likely(cfunc->func)) { - int flag = cfunc->flag; - if (flag == METH_O) { - return (*(cfunc->func))(self, arg); - } else if ((PY_VERSION_HEX >= 0x030600B1) && flag == METH_FASTCALL) { - if ((PY_VERSION_HEX >= 0x030700A0)) { - return (*(__Pyx_PyCFunctionFast)(void*)(PyCFunction)cfunc->func)(self, &arg, 1); - } else { - return (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)cfunc->func)(self, &arg, 1, NULL); - } - } else if ((PY_VERSION_HEX >= 0x030700A0) && flag == (METH_FASTCALL | METH_KEYWORDS)) { - return 
(*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)cfunc->func)(self, &arg, 1, NULL); - } - } - return __Pyx__CallUnboundCMethod1(cfunc, self, arg); -} -#endif -static PyObject* __Pyx__CallUnboundCMethod1(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg){ - PyObject *args, *result = NULL; - if (unlikely(!cfunc->func && !cfunc->method) && unlikely(__Pyx_TryUnpackUnboundCMethod(cfunc) < 0)) return NULL; -#if CYTHON_COMPILING_IN_CPYTHON - if (cfunc->func && (cfunc->flag & METH_VARARGS)) { - args = PyTuple_New(1); - if (unlikely(!args)) goto bad; - Py_INCREF(arg); - PyTuple_SET_ITEM(args, 0, arg); - if (cfunc->flag & METH_KEYWORDS) - result = (*(PyCFunctionWithKeywords)(void*)(PyCFunction)cfunc->func)(self, args, NULL); - else - result = (*cfunc->func)(self, args); - } else { - args = PyTuple_New(2); - if (unlikely(!args)) goto bad; - Py_INCREF(self); - PyTuple_SET_ITEM(args, 0, self); - Py_INCREF(arg); - PyTuple_SET_ITEM(args, 1, arg); - result = __Pyx_PyObject_Call(cfunc->method, args, NULL); - } -#else - args = PyTuple_Pack(2, self, arg); - if (unlikely(!args)) goto bad; - result = __Pyx_PyObject_Call(cfunc->method, args, NULL); -#endif -bad: - Py_XDECREF(args); - return result; -} - -/* CallUnboundCMethod2 */ -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030600B1 -static CYTHON_INLINE PyObject *__Pyx_CallUnboundCMethod2(__Pyx_CachedCFunction *cfunc, PyObject *self, PyObject *arg1, PyObject *arg2) { - if (likely(cfunc->func)) { - PyObject *args[2] = {arg1, arg2}; - if (cfunc->flag == METH_FASTCALL) { - #if PY_VERSION_HEX >= 0x030700A0 - return (*(__Pyx_PyCFunctionFast)(void*)(PyCFunction)cfunc->func)(self, args, 2); - #else - return (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)cfunc->func)(self, args, 2, NULL); - #endif - } - #if PY_VERSION_HEX >= 0x030700A0 - if (cfunc->flag == (METH_FASTCALL | METH_KEYWORDS)) - return (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)cfunc->func)(self, args, 2, NULL); - #endif - } - 
return __Pyx__CallUnboundCMethod2(cfunc, self, arg1, arg2); -} -#endif -static PyObject* __Pyx__CallUnboundCMethod2(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg1, PyObject* arg2){ - PyObject *args, *result = NULL; - if (unlikely(!cfunc->func && !cfunc->method) && unlikely(__Pyx_TryUnpackUnboundCMethod(cfunc) < 0)) return NULL; -#if CYTHON_COMPILING_IN_CPYTHON - if (cfunc->func && (cfunc->flag & METH_VARARGS)) { - args = PyTuple_New(2); - if (unlikely(!args)) goto bad; - Py_INCREF(arg1); - PyTuple_SET_ITEM(args, 0, arg1); - Py_INCREF(arg2); - PyTuple_SET_ITEM(args, 1, arg2); - if (cfunc->flag & METH_KEYWORDS) - result = (*(PyCFunctionWithKeywords)(void*)(PyCFunction)cfunc->func)(self, args, NULL); - else - result = (*cfunc->func)(self, args); - } else { - args = PyTuple_New(3); - if (unlikely(!args)) goto bad; - Py_INCREF(self); - PyTuple_SET_ITEM(args, 0, self); - Py_INCREF(arg1); - PyTuple_SET_ITEM(args, 1, arg1); - Py_INCREF(arg2); - PyTuple_SET_ITEM(args, 2, arg2); - result = __Pyx_PyObject_Call(cfunc->method, args, NULL); - } -#else - args = PyTuple_Pack(3, self, arg1, arg2); - if (unlikely(!args)) goto bad; - result = __Pyx_PyObject_Call(cfunc->method, args, NULL); -#endif -bad: - Py_XDECREF(args); - return result; -} - -/* dict_getitem_default */ -static PyObject* __Pyx_PyDict_GetItemDefault(PyObject* d, PyObject* key, PyObject* default_value) { - PyObject* value; -#if PY_MAJOR_VERSION >= 3 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07020000) - value = PyDict_GetItemWithError(d, key); - if (unlikely(!value)) { - if (unlikely(PyErr_Occurred())) - return NULL; - value = default_value; - } - Py_INCREF(value); - if ((1)); -#else - if (PyString_CheckExact(key) || PyUnicode_CheckExact(key) || PyInt_CheckExact(key)) { - value = PyDict_GetItem(d, key); - if (unlikely(!value)) { - value = default_value; - } - Py_INCREF(value); - } -#endif - else { - if (default_value == Py_None) - value = 
__Pyx_CallUnboundCMethod1(&__pyx_umethod_PyDict_Type_get, d, key); - else - value = __Pyx_CallUnboundCMethod2(&__pyx_umethod_PyDict_Type_get, d, key, default_value); - } - return value; -} - -/* DelItemInt */ -static int __Pyx_DelItem_Generic(PyObject *o, PyObject *j) { - int r; - if (unlikely(!j)) return -1; - r = PyObject_DelItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE int __Pyx_DelItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, CYTHON_NCP_UNUSED int wraparound) { -#if !CYTHON_USE_TYPE_SLOTS - if (is_list || PySequence_Check(o)) { - return PySequence_DelItem(o, i); - } -#else - PyMappingMethods *mm = Py_TYPE(o)->tp_as_mapping; - PySequenceMethods *sm = Py_TYPE(o)->tp_as_sequence; - if ((!is_list) && mm && mm->mp_ass_subscript) { - PyObject *key = PyInt_FromSsize_t(i); - return likely(key) ? mm->mp_ass_subscript(o, key, (PyObject *)NULL) : -1; - } - if (likely(sm && sm->sq_ass_item)) { - if (wraparound && unlikely(i < 0) && likely(sm->sq_length)) { - Py_ssize_t l = sm->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return -1; - PyErr_Clear(); - } - } - return sm->sq_ass_item(o, i, (PyObject *)NULL); - } -#endif - return __Pyx_DelItem_Generic(o, PyInt_FromSsize_t(i)); -} - -/* JoinPyUnicode */ -static PyObject* __Pyx_PyUnicode_Join(PyObject* value_tuple, Py_ssize_t value_count, Py_ssize_t result_ulength, - Py_UCS4 max_char) { -#if CYTHON_USE_UNICODE_INTERNALS && CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - PyObject *result_uval; - int result_ukind, kind_shift; - Py_ssize_t i, char_pos; - void *result_udata; - CYTHON_MAYBE_UNUSED_VAR(max_char); -#if CYTHON_PEP393_ENABLED - result_uval = PyUnicode_New(result_ulength, max_char); - if (unlikely(!result_uval)) return NULL; - result_ukind = (max_char <= 255) ? PyUnicode_1BYTE_KIND : (max_char <= 65535) ? PyUnicode_2BYTE_KIND : PyUnicode_4BYTE_KIND; - kind_shift = (result_ukind == PyUnicode_4BYTE_KIND) ? 
2 : result_ukind - 1; - result_udata = PyUnicode_DATA(result_uval); -#else - result_uval = PyUnicode_FromUnicode(NULL, result_ulength); - if (unlikely(!result_uval)) return NULL; - result_ukind = sizeof(Py_UNICODE); - kind_shift = (result_ukind == 4) ? 2 : result_ukind - 1; - result_udata = PyUnicode_AS_UNICODE(result_uval); -#endif - assert(kind_shift == 2 || kind_shift == 1 || kind_shift == 0); - char_pos = 0; - for (i=0; i < value_count; i++) { - int ukind; - Py_ssize_t ulength; - void *udata; - PyObject *uval = PyTuple_GET_ITEM(value_tuple, i); - if (unlikely(__Pyx_PyUnicode_READY(uval))) - goto bad; - ulength = __Pyx_PyUnicode_GET_LENGTH(uval); - if (unlikely(!ulength)) - continue; - if (unlikely((PY_SSIZE_T_MAX >> kind_shift) - ulength < char_pos)) - goto overflow; - ukind = __Pyx_PyUnicode_KIND(uval); - udata = __Pyx_PyUnicode_DATA(uval); - if (!CYTHON_PEP393_ENABLED || ukind == result_ukind) { - memcpy((char *)result_udata + (char_pos << kind_shift), udata, (size_t) (ulength << kind_shift)); - } else { - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030300F0 || defined(_PyUnicode_FastCopyCharacters) - _PyUnicode_FastCopyCharacters(result_uval, char_pos, uval, 0, ulength); - #else - Py_ssize_t j; - for (j=0; j < ulength; j++) { - Py_UCS4 uchar = __Pyx_PyUnicode_READ(ukind, udata, j); - __Pyx_PyUnicode_WRITE(result_ukind, result_udata, char_pos+j, uchar); - } - #endif - } - char_pos += ulength; - } - return result_uval; -overflow: - PyErr_SetString(PyExc_OverflowError, "join() result is too long for a Python string"); -bad: - Py_DECREF(result_uval); - return NULL; -#else - CYTHON_UNUSED_VAR(max_char); - CYTHON_UNUSED_VAR(result_ulength); - CYTHON_UNUSED_VAR(value_count); - return PyUnicode_Join(__pyx_empty_unicode, value_tuple); -#endif -} - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - __Pyx_PyThreadState_declare - CYTHON_UNUSED_VAR(cause); - 
Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; - } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto 
bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#else - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* PyObjectSetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value) { - PyTypeObject* tp = Py_TYPE(obj); - if 
(likely(tp->tp_setattro)) - return tp->tp_setattro(obj, attr_name, value); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_setattr)) - return tp->tp_setattr(obj, PyString_AS_STRING(attr_name), value); -#endif - return PyObject_SetAttr(obj, attr_name, value); -} -#endif - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject *index) { - PyObject *runerr; - Py_ssize_t key_value; - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - __Pyx_TypeName index_type_name = __Pyx_PyType_GetName(Py_TYPE(index)); - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, - "cannot fit '" __Pyx_FMT_TYPENAME "' into an index-sized integer", index_type_name); - __Pyx_DECREF_TypeName(index_type_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem_Slow(PyObject *obj, PyObject *key) { - __Pyx_TypeName obj_type_name; - if (likely(PyType_Check(obj))) { - PyObject *meth = __Pyx_PyObject_GetAttrStrNoError(obj, __pyx_n_s_class_getitem); - if (meth) { - PyObject *result = __Pyx_PyObject_CallOneArg(meth, key); - Py_DECREF(meth); - return result; - } - } - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "'" __Pyx_FMT_TYPENAME "' object is not subscriptable", obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject *key) { - PyTypeObject *tp = Py_TYPE(obj); - PyMappingMethods *mm = tp->tp_as_mapping; - PySequenceMethods *sm = tp->tp_as_sequence; - if (likely(mm && mm->mp_subscript)) { - return mm->mp_subscript(obj, key); - } - if (likely(sm && sm->sq_item)) { - return __Pyx_PyObject_GetIndex(obj, key); - } - return __Pyx_PyObject_GetItem_Slow(obj, key); -} -#endif - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void 
__Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? "" : "s"); -} - -/* IterFinish */ -static CYTHON_INLINE int __Pyx_IterFinish(void) { -#if CYTHON_FAST_THREAD_STATE - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* exc_type = tstate->curexc_type; - if (unlikely(exc_type)) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) { - PyObject *exc_value, *exc_tb; - exc_value = tstate->curexc_value; - exc_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; - Py_DECREF(exc_type); - Py_XDECREF(exc_value); - Py_XDECREF(exc_tb); - return 0; - } else { - return -1; - } - } - return 0; -#else - if (unlikely(PyErr_Occurred())) { - if (likely(PyErr_ExceptionMatches(PyExc_StopIteration))) { - PyErr_Clear(); - return 0; - } else { - return -1; - } - } - return 0; -#endif -} - -/* UnpackItemEndCheck */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected) { - if (unlikely(retval)) { - Py_DECREF(retval); - __Pyx_RaiseTooManyValuesError(expected); - return -1; - } - return __Pyx_IterFinish(); -} - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long x; - long a = PyInt_AS_LONG(op1); - - x = (long)((unsigned long)a + b); - if (likely((x^a) >= 0 || (x^b) >= 0)) - return 
PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - if (unlikely(size == 0)) { - return __Pyx_NewRef(op2); - } - if (likely(__Pyx_sst_abs(size) <= 1)) { - a = likely(size) ? digits[0] : 0; - if (size == -1) a = -a; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = 
(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - } - x = a + b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla + llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if 
CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - double result; - - PyFPE_START_PROTECT("add", return NULL) - result = ((double)a) + (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2); -} -#endif - -/* SliceObject */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetSlice(PyObject* obj, - Py_ssize_t cstart, Py_ssize_t cstop, - PyObject** _py_start, PyObject** _py_stop, PyObject** _py_slice, - int has_cstart, int has_cstop, int wraparound) { - __Pyx_TypeName obj_type_name; -#if CYTHON_USE_TYPE_SLOTS - PyMappingMethods* mp; -#if PY_MAJOR_VERSION < 3 - PySequenceMethods* ms = Py_TYPE(obj)->tp_as_sequence; - if (likely(ms && ms->sq_slice)) { - if (!has_cstart) { - if (_py_start && (*_py_start != Py_None)) { - cstart = __Pyx_PyIndex_AsSsize_t(*_py_start); - if ((cstart == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; - } else - cstart = 0; - } - if (!has_cstop) { - if (_py_stop && (*_py_stop != Py_None)) { - cstop = __Pyx_PyIndex_AsSsize_t(*_py_stop); - if ((cstop == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; - } else - cstop = PY_SSIZE_T_MAX; - } - if (wraparound && unlikely((cstart < 0) | (cstop < 0)) && likely(ms->sq_length)) { - Py_ssize_t l = ms->sq_length(obj); - if (likely(l >= 0)) { - if (cstop < 0) { - cstop += l; - if (cstop < 0) cstop = 0; - } - if (cstart < 0) { - cstart += l; - if (cstart < 0) cstart = 0; - } - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - goto bad; - PyErr_Clear(); - } - } - return ms->sq_slice(obj, cstart, cstop); - } -#else - CYTHON_UNUSED_VAR(wraparound); -#endif - mp = Py_TYPE(obj)->tp_as_mapping; - if (likely(mp && mp->mp_subscript)) -#else - CYTHON_UNUSED_VAR(wraparound); -#endif - { - PyObject* result; - PyObject *py_slice, *py_start, *py_stop; - if (_py_slice) { - py_slice = *_py_slice; - } else { - PyObject* owned_start = NULL; - 
PyObject* owned_stop = NULL; - if (_py_start) { - py_start = *_py_start; - } else { - if (has_cstart) { - owned_start = py_start = PyInt_FromSsize_t(cstart); - if (unlikely(!py_start)) goto bad; - } else - py_start = Py_None; - } - if (_py_stop) { - py_stop = *_py_stop; - } else { - if (has_cstop) { - owned_stop = py_stop = PyInt_FromSsize_t(cstop); - if (unlikely(!py_stop)) { - Py_XDECREF(owned_start); - goto bad; - } - } else - py_stop = Py_None; - } - py_slice = PySlice_New(py_start, py_stop, Py_None); - Py_XDECREF(owned_start); - Py_XDECREF(owned_stop); - if (unlikely(!py_slice)) goto bad; - } -#if CYTHON_USE_TYPE_SLOTS - result = mp->mp_subscript(obj, py_slice); -#else - result = PyObject_GetItem(obj, py_slice); -#endif - if (!_py_slice) { - Py_DECREF(py_slice); - } - return result; - } - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "'" __Pyx_FMT_TYPENAME "' object is unsliceable", obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); -bad: - return NULL; -} - -/* PyIntCompare */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_NeObjC(PyObject *op1, PyObject *op2, long intval, long inplace) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_UNUSED_VAR(inplace); - if (op1 == op2) { - Py_RETURN_FALSE; - } - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long a = PyInt_AS_LONG(op1); - if (a != b) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - int unequal; - unsigned long uintval; - Py_ssize_t size = Py_SIZE(op1); - const digit* digits = ((PyLongObject*)op1)->ob_digit; - if (intval == 0) { - if (size != 0) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } else if (intval < 0) { - if (size >= 0) - Py_RETURN_TRUE; - intval = -intval; - size = -size; - } else { - if (size <= 0) - Py_RETURN_TRUE; - } - uintval = (unsigned long) intval; -#if PyLong_SHIFT * 4 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 4)) { - unequal 
= (size != 5) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[4] != ((uintval >> (4 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 3 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 3)) { - unequal = (size != 4) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 2 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 2)) { - unequal = (size != 3) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 1 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 1)) { - unequal = (size != 2) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif - unequal = (size != 1) || (((unsigned long) digits[0]) != (uintval & (unsigned long) PyLong_MASK)); - if (unequal != 0) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - if ((double)a != (double)b) Py_RETURN_TRUE; else Py_RETURN_FALSE; - } - return ( - PyObject_RichCompare(op1, op2, Py_NE)); -} - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* 
__Pyx_PyInt_MultiplyCObj(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op2))) { - const long a = intval; - long b = PyInt_AS_LONG(op2); - -#ifdef HAVE_LONG_LONG - if (sizeof(PY_LONG_LONG) > sizeof(long)) { - PY_LONG_LONG result = (PY_LONG_LONG)a * (PY_LONG_LONG)b; - return (result >= LONG_MIN && result <= LONG_MAX) ? - PyInt_FromLong((long)result) : PyLong_FromLongLong(result); - } -#endif -#if CYTHON_USE_TYPE_SLOTS - return PyInt_Type.tp_as_number->nb_multiply(op1, op2); -#else - return PyNumber_Multiply(op1, op2); -#endif - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op2))) { - const long a = intval; - long b, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG lla = intval; - PY_LONG_LONG llb, llx; -#endif - const digit* digits = ((PyLongObject*)op2)->ob_digit; - const Py_ssize_t size = Py_SIZE(op2); - if (unlikely(size == 0)) { - return __Pyx_NewRef(op2); - } - if (likely(__Pyx_sst_abs(size) <= 1)) { - b = likely(size) ? 
digits[0] : 0; - if (size == -1) b = -b; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT+30) { - b = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT+30) { - llb = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT+30) { - b = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT+30) { - llb = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT+30) { - b = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT+30) { - llb = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT+30) { - b = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT+30) { - llb = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 
1 > 4 * PyLong_SHIFT+30) { - b = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT+30) { - llb = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT+30) { - b = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT+30) { - llb = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_multiply(op1, op2); - } - } - (void)a; (void)b; - #ifdef HAVE_LONG_LONG - llb = b; - goto long_long; - #else - return PyLong_Type.tp_as_number->nb_multiply(op1, op2); - #endif - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla * llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op2)) { - const long a = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double b = __pyx_PyFloat_AsDouble(op2); -#else - double b = PyFloat_AS_DOUBLE(op2); -#endif - double result; - - PyFPE_START_PROTECT("multiply", return NULL) - result = ((double)a) * (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? 
PyNumber_InPlaceMultiply : PyNumber_Multiply)(op1, op2); -} -#endif - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_MultiplyObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long a = PyInt_AS_LONG(op1); - -#ifdef HAVE_LONG_LONG - if (sizeof(PY_LONG_LONG) > sizeof(long)) { - PY_LONG_LONG result = (PY_LONG_LONG)a * (PY_LONG_LONG)b; - return (result >= LONG_MIN && result <= LONG_MAX) ? - PyInt_FromLong((long)result) : PyLong_FromLongLong(result); - } -#endif -#if CYTHON_USE_TYPE_SLOTS - return PyInt_Type.tp_as_number->nb_multiply(op1, op2); -#else - return PyNumber_Multiply(op1, op2); -#endif - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - if (unlikely(size == 0)) { - return __Pyx_NewRef(op1); - } - if (likely(__Pyx_sst_abs(size) <= 1)) { - a = likely(size) ? 
digits[0] : 0; - if (size == -1) a = -a; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT+30) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT+30) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT+30) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT+30) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT+30) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT+30) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT+30) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT+30) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 
1 > 4 * PyLong_SHIFT+30) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT+30) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT+30) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT+30) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_multiply(op1, op2); - } - } - (void)a; (void)b; - #ifdef HAVE_LONG_LONG - lla = a; - goto long_long; - #else - return PyLong_Type.tp_as_number->nb_multiply(op1, op2); - #endif - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla * llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - double result; - - PyFPE_START_PROTECT("multiply", return NULL) - result = ((double)a) * (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? 
PyNumber_InPlaceMultiply : PyNumber_Multiply)(op1, op2); -} -#endif - -/* RaiseUnboundLocalError */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname) { - PyErr_Format(PyExc_UnboundLocalError, "local variable '%s' referenced before assignment", varname); -} - -/* PyObjectCall2Args */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args[3] = {NULL, arg1, arg2}; - return __Pyx_PyObject_FastCall(function, args+1, 2 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* PyObjectGetMethod */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method) { - PyObject *attr; -#if CYTHON_UNPACK_METHODS && CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_PYTYPE_LOOKUP - __Pyx_TypeName type_name; - PyTypeObject *tp = Py_TYPE(obj); - PyObject *descr; - descrgetfunc f = NULL; - PyObject **dictptr, *dict; - int meth_found = 0; - assert (*method == NULL); - if (unlikely(tp->tp_getattro != PyObject_GenericGetAttr)) { - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; - } - if (unlikely(tp->tp_dict == NULL) && unlikely(PyType_Ready(tp) < 0)) { - return 0; - } - descr = _PyType_Lookup(tp, name); - if (likely(descr != NULL)) { - Py_INCREF(descr); -#if defined(Py_TPFLAGS_METHOD_DESCRIPTOR) && Py_TPFLAGS_METHOD_DESCRIPTOR - if (__Pyx_PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_METHOD_DESCRIPTOR)) -#elif PY_MAJOR_VERSION >= 3 - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type))) - #endif -#else - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr))) - #endif -#endif - { - meth_found = 1; - } else { - f = Py_TYPE(descr)->tp_descr_get; - if (f != NULL && PyDescr_IsData(descr)) { - attr 
= f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - } - } - dictptr = _PyObject_GetDictPtr(obj); - if (dictptr != NULL && (dict = *dictptr) != NULL) { - Py_INCREF(dict); - attr = __Pyx_PyDict_GetItemStr(dict, name); - if (attr != NULL) { - Py_INCREF(attr); - Py_DECREF(dict); - Py_XDECREF(descr); - goto try_unpack; - } - Py_DECREF(dict); - } - if (meth_found) { - *method = descr; - return 1; - } - if (f != NULL) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - if (likely(descr != NULL)) { - *method = descr; - return 0; - } - type_name = __Pyx_PyType_GetName(tp); - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%U'", - type_name, name); -#else - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%.400s'", - type_name, PyString_AS_STRING(name)); -#endif - __Pyx_DECREF_TypeName(type_name); - return 0; -#else - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; -#endif -try_unpack: -#if CYTHON_UNPACK_METHODS - if (likely(attr) && PyMethod_Check(attr) && likely(PyMethod_GET_SELF(attr) == obj)) { - PyObject *function = PyMethod_GET_FUNCTION(attr); - Py_INCREF(function); - Py_DECREF(attr); - *method = function; - return 1; - } -#endif - *method = attr; - return 0; -} - -/* PyObjectCallMethod1 */ -static PyObject* __Pyx__PyObject_CallMethod1(PyObject* method, PyObject* arg) { - PyObject *result = __Pyx_PyObject_CallOneArg(method, arg); - Py_DECREF(method); - return result; -} -static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg) { - PyObject *method = NULL, *result; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_Call2Args(method, obj, arg); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) return NULL; - return __Pyx__PyObject_CallMethod1(method, arg); -} - -/* append */ 
-static CYTHON_INLINE int __Pyx_PyObject_Append(PyObject* L, PyObject* x) { - if (likely(PyList_CheckExact(L))) { - if (unlikely(__Pyx_PyList_Append(L, x) < 0)) return -1; - } else { - PyObject* retval = __Pyx_PyObject_CallMethod1(L, __pyx_n_s_append, x); - if (unlikely(!retval)) - return -1; - Py_DECREF(retval); - } - return 0; -} - -/* SliceTupleAndList */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE void __Pyx_crop_slice(Py_ssize_t* _start, Py_ssize_t* _stop, Py_ssize_t* _length) { - Py_ssize_t start = *_start, stop = *_stop, length = *_length; - if (start < 0) { - start += length; - if (start < 0) - start = 0; - } - if (stop < 0) - stop += length; - else if (stop > length) - stop = length; - *_length = stop - start; - *_start = start; - *_stop = stop; -} -static CYTHON_INLINE PyObject* __Pyx_PyList_GetSlice( - PyObject* src, Py_ssize_t start, Py_ssize_t stop) { - Py_ssize_t length = PyList_GET_SIZE(src); - __Pyx_crop_slice(&start, &stop, &length); - return __Pyx_PyList_FromArray(((PyListObject*)src)->ob_item + start, length); -} -static CYTHON_INLINE PyObject* __Pyx_PyTuple_GetSlice( - PyObject* src, Py_ssize_t start, Py_ssize_t stop) { - Py_ssize_t length = PyTuple_GET_SIZE(src); - __Pyx_crop_slice(&start, &stop, &length); - return __Pyx_PyTuple_FromArray(((PyTupleObject*)src)->ob_item + start, length); -} -#endif - -/* PyObjectLookupSpecial */ -#if CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx__PyObject_LookupSpecial(PyObject* obj, PyObject* attr_name, int with_error) { - PyObject *res; - PyTypeObject *tp = Py_TYPE(obj); -#if PY_MAJOR_VERSION < 3 - if (unlikely(PyInstance_Check(obj))) - return with_error ? 
__Pyx_PyObject_GetAttrStr(obj, attr_name) : __Pyx_PyObject_GetAttrStrNoError(obj, attr_name); -#endif - res = _PyType_Lookup(tp, attr_name); - if (likely(res)) { - descrgetfunc f = Py_TYPE(res)->tp_descr_get; - if (!f) { - Py_INCREF(res); - } else { - res = f(res, obj, (PyObject *)tp); - } - } else if (with_error) { - PyErr_SetObject(PyExc_AttributeError, attr_name); - } - return res; -} -#endif - -/* SliceObject */ -static CYTHON_INLINE int __Pyx_PyObject_SetSlice(PyObject* obj, PyObject* value, - Py_ssize_t cstart, Py_ssize_t cstop, - PyObject** _py_start, PyObject** _py_stop, PyObject** _py_slice, - int has_cstart, int has_cstop, int wraparound) { - __Pyx_TypeName obj_type_name; -#if CYTHON_USE_TYPE_SLOTS - PyMappingMethods* mp; -#if PY_MAJOR_VERSION < 3 - PySequenceMethods* ms = Py_TYPE(obj)->tp_as_sequence; - if (likely(ms && ms->sq_ass_slice)) { - if (!has_cstart) { - if (_py_start && (*_py_start != Py_None)) { - cstart = __Pyx_PyIndex_AsSsize_t(*_py_start); - if ((cstart == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; - } else - cstart = 0; - } - if (!has_cstop) { - if (_py_stop && (*_py_stop != Py_None)) { - cstop = __Pyx_PyIndex_AsSsize_t(*_py_stop); - if ((cstop == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; - } else - cstop = PY_SSIZE_T_MAX; - } - if (wraparound && unlikely((cstart < 0) | (cstop < 0)) && likely(ms->sq_length)) { - Py_ssize_t l = ms->sq_length(obj); - if (likely(l >= 0)) { - if (cstop < 0) { - cstop += l; - if (cstop < 0) cstop = 0; - } - if (cstart < 0) { - cstart += l; - if (cstart < 0) cstart = 0; - } - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - goto bad; - PyErr_Clear(); - } - } - return ms->sq_ass_slice(obj, cstart, cstop, value); - } -#else - CYTHON_UNUSED_VAR(wraparound); -#endif - mp = Py_TYPE(obj)->tp_as_mapping; - if (likely(mp && mp->mp_ass_subscript)) -#else - CYTHON_UNUSED_VAR(wraparound); -#endif - { - int result; - PyObject *py_slice, *py_start, *py_stop; - if (_py_slice) { - py_slice = *_py_slice; 
- } else { - PyObject* owned_start = NULL; - PyObject* owned_stop = NULL; - if (_py_start) { - py_start = *_py_start; - } else { - if (has_cstart) { - owned_start = py_start = PyInt_FromSsize_t(cstart); - if (unlikely(!py_start)) goto bad; - } else - py_start = Py_None; - } - if (_py_stop) { - py_stop = *_py_stop; - } else { - if (has_cstop) { - owned_stop = py_stop = PyInt_FromSsize_t(cstop); - if (unlikely(!py_stop)) { - Py_XDECREF(owned_start); - goto bad; - } - } else - py_stop = Py_None; - } - py_slice = PySlice_New(py_start, py_stop, Py_None); - Py_XDECREF(owned_start); - Py_XDECREF(owned_stop); - if (unlikely(!py_slice)) goto bad; - } -#if CYTHON_USE_TYPE_SLOTS - result = mp->mp_ass_subscript(obj, py_slice, value); -#else - result = value ? PyObject_SetItem(obj, py_slice, value) : PyObject_DelItem(obj, py_slice); -#endif - if (!_py_slice) { - Py_DECREF(py_slice); - } - return result; - } - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "'" __Pyx_FMT_TYPENAME "' object does not support slice %.10s", - obj_type_name, value ? 
"assignment" : "deletion"); - __Pyx_DECREF_TypeName(obj_type_name); -bad: - return -1; -} - -/* FixUpExtensionType */ -#if CYTHON_USE_TYPE_SPECS -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type) { -#if PY_VERSION_HEX > 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - (void) spec; - (void) type; -#else - const PyType_Slot *slot = spec->slots; - while (slot && slot->slot && slot->slot != Py_tp_members) - slot++; - if (slot && slot->slot == Py_tp_members) { - int changed = 0; -#if !(PY_VERSION_HEX <= 0x030900b1 && CYTHON_COMPILING_IN_CPYTHON) - const -#endif - PyMemberDef *memb = (PyMemberDef*) slot->pfunc; - while (memb && memb->name) { - if (memb->name[0] == '_' && memb->name[1] == '_') { -#if PY_VERSION_HEX < 0x030900b1 - if (strcmp(memb->name, "__weaklistoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_weaklistoffset = memb->offset; - changed = 1; - } - else if (strcmp(memb->name, "__dictoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_dictoffset = memb->offset; - changed = 1; - } -#if CYTHON_METH_FASTCALL - else if (strcmp(memb->name, "__vectorcalloffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); -#if PY_VERSION_HEX >= 0x030800b4 - type->tp_vectorcall_offset = memb->offset; -#else - type->tp_print = (printfunc) memb->offset; -#endif - changed = 1; - } -#endif -#else - if ((0)); -#endif -#if PY_VERSION_HEX <= 0x030900b1 && CYTHON_COMPILING_IN_CPYTHON - else if (strcmp(memb->name, "__module__") == 0) { - PyObject *descr; - assert(memb->type == T_OBJECT); - assert(memb->flags == 0 || memb->flags == READONLY); - descr = PyDescr_NewMember(type, memb); - if (unlikely(!descr)) - return -1; - if (unlikely(PyDict_SetItem(type->tp_dict, PyDescr_NAME(descr), descr) < 0)) { - Py_DECREF(descr); - return -1; - } - Py_DECREF(descr); - changed = 1; - } -#endif - } - memb++; - } - if (changed) - 
PyType_Modified(type); - } -#endif - return 0; -} -#endif - -/* PyObjectCallNoArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { - PyObject *arg = NULL; - return __Pyx_PyObject_FastCall(func, (&arg)+1, 0 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* PyObjectCallMethod0 */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name) { - PyObject *method = NULL, *result = NULL; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_CallOneArg(method, obj); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) goto bad; - result = __Pyx_PyObject_CallNoArg(method); - Py_DECREF(method); -bad: - return result; -} - -/* ValidateBasesTuple */ -#if CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_USE_TYPE_SPECS -static int __Pyx_validate_bases_tuple(const char *type_name, Py_ssize_t dictoffset, PyObject *bases) { - Py_ssize_t i, n = PyTuple_GET_SIZE(bases); - for (i = 1; i < n; i++) - { - PyObject *b0 = PyTuple_GET_ITEM(bases, i); - PyTypeObject *b; -#if PY_MAJOR_VERSION < 3 - if (PyClass_Check(b0)) - { - PyErr_Format(PyExc_TypeError, "base class '%.200s' is an old-style class", - PyString_AS_STRING(((PyClassObject*)b0)->cl_name)); - return -1; - } -#endif - b = (PyTypeObject*) b0; - if (!__Pyx_PyType_HasFeature(b, Py_TPFLAGS_HEAPTYPE)) - { - __Pyx_TypeName b_name = __Pyx_PyType_GetName(b); - PyErr_Format(PyExc_TypeError, - "base class '" __Pyx_FMT_TYPENAME "' is not a heap type", b_name); - __Pyx_DECREF_TypeName(b_name); - return -1; - } - if (dictoffset == 0 && b->tp_dictoffset) - { - __Pyx_TypeName b_name = __Pyx_PyType_GetName(b); - PyErr_Format(PyExc_TypeError, - "extension type '%.200s' has no __dict__ slot, " - "but base type '" __Pyx_FMT_TYPENAME "' has: " - "either add 'cdef dict __dict__' to the extension type " - "or add '__slots__ = [...]' to the base type", - type_name, b_name); - 
__Pyx_DECREF_TypeName(b_name); - return -1; - } - } - return 0; -} -#endif - -/* PyType_Ready */ -static int __Pyx_PyType_Ready(PyTypeObject *t) { -#if CYTHON_USE_TYPE_SPECS || !(CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API) || defined(PYSTON_MAJOR_VERSION) - (void)__Pyx_PyObject_CallMethod0; -#if CYTHON_USE_TYPE_SPECS - (void)__Pyx_validate_bases_tuple; -#endif - return PyType_Ready(t); -#else - int r; - PyObject *bases = __Pyx_PyType_GetSlot(t, tp_bases, PyObject*); - if (bases && unlikely(__Pyx_validate_bases_tuple(t->tp_name, t->tp_dictoffset, bases) == -1)) - return -1; -#if PY_VERSION_HEX >= 0x03050000 && !defined(PYSTON_MAJOR_VERSION) - { - int gc_was_enabled; - #if PY_VERSION_HEX >= 0x030A00b1 - gc_was_enabled = PyGC_Disable(); - (void)__Pyx_PyObject_CallMethod0; - #else - PyObject *ret, *py_status; - PyObject *gc = NULL; - #if PY_VERSION_HEX >= 0x030700a1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM+0 >= 0x07030400) - gc = PyImport_GetModule(__pyx_kp_u_gc); - #endif - if (unlikely(!gc)) gc = PyImport_Import(__pyx_kp_u_gc); - if (unlikely(!gc)) return -1; - py_status = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_isenabled); - if (unlikely(!py_status)) { - Py_DECREF(gc); - return -1; - } - gc_was_enabled = __Pyx_PyObject_IsTrue(py_status); - Py_DECREF(py_status); - if (gc_was_enabled > 0) { - ret = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_disable); - if (unlikely(!ret)) { - Py_DECREF(gc); - return -1; - } - Py_DECREF(ret); - } else if (unlikely(gc_was_enabled == -1)) { - Py_DECREF(gc); - return -1; - } - #endif - t->tp_flags |= Py_TPFLAGS_HEAPTYPE; -#else - (void)__Pyx_PyObject_CallMethod0; -#endif - r = PyType_Ready(t); -#if PY_VERSION_HEX >= 0x03050000 && !defined(PYSTON_MAJOR_VERSION) - t->tp_flags &= ~Py_TPFLAGS_HEAPTYPE; - #if PY_VERSION_HEX >= 0x030A00b1 - if (gc_was_enabled) - PyGC_Enable(); - #else - if (gc_was_enabled) { - PyObject *tp, *v, *tb; - PyErr_Fetch(&tp, &v, &tb); - ret = __Pyx_PyObject_CallMethod0(gc, 
__pyx_kp_u_enable); - if (likely(ret || r == -1)) { - Py_XDECREF(ret); - PyErr_Restore(tp, v, tb); - } else { - Py_XDECREF(tp); - Py_XDECREF(v); - Py_XDECREF(tb); - r = -1; - } - } - Py_DECREF(gc); - #endif - } -#endif - return r; -#endif -} - -/* PyObject_GenericGetAttrNoDict */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) { - __Pyx_TypeName type_name = __Pyx_PyType_GetName(tp); - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%U'", - type_name, attr_name); -#else - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%.400s'", - type_name, PyString_AS_STRING(attr_name)); -#endif - __Pyx_DECREF_TypeName(type_name); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) { - PyObject *descr; - PyTypeObject *tp = Py_TYPE(obj); - if (unlikely(!PyString_Check(attr_name))) { - return PyObject_GenericGetAttr(obj, attr_name); - } - assert(!tp->tp_dictoffset); - descr = _PyType_Lookup(tp, attr_name); - if (unlikely(!descr)) { - return __Pyx_RaiseGenericGetAttributeError(tp, attr_name); - } - Py_INCREF(descr); - #if PY_MAJOR_VERSION < 3 - if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS))) - #endif - { - descrgetfunc f = Py_TYPE(descr)->tp_descr_get; - if (unlikely(f)) { - PyObject *res = f(descr, obj, (PyObject *)tp); - Py_DECREF(descr); - return res; - } - } - return descr; -} -#endif - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *module = 0; - PyObject *empty_dict = 0; - PyObject *empty_list = 0; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (unlikely(!py_import)) - goto bad; - if (!from_list) { - empty_list = PyList_New(0); - if (unlikely(!empty_list)) - 
goto bad; - from_list = empty_list; - } - #endif - empty_dict = PyDict_New(); - if (unlikely(!empty_dict)) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - #if CYTHON_COMPILING_IN_LIMITED_API - module = PyImport_ImportModuleLevelObject( - name, empty_dict, empty_dict, from_list, 1); - #else - module = PyImport_ImportModuleLevelObject( - name, __pyx_d, empty_dict, from_list, 1); - #endif - if (unlikely(!module)) { - if (unlikely(!PyErr_ExceptionMatches(PyExc_ImportError))) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (unlikely(!py_level)) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, __pyx_d, empty_dict, from_list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - #if CYTHON_COMPILING_IN_LIMITED_API - module = PyImport_ImportModuleLevelObject( - name, empty_dict, empty_dict, from_list, level); - #else - module = PyImport_ImportModuleLevelObject( - name, __pyx_d, empty_dict, from_list, level); - #endif - #endif - } - } -bad: - Py_XDECREF(empty_dict); - Py_XDECREF(empty_list); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - return module; -} - -/* ImportDottedModule */ -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx__ImportDottedModule_Error(PyObject *name, PyObject *parts_tuple, Py_ssize_t count) { - PyObject *partial_name = NULL, *slice = NULL, *sep = NULL; - if (unlikely(PyErr_Occurred())) { - PyErr_Clear(); - } - if (likely(PyTuple_GET_SIZE(parts_tuple) == count)) { - partial_name = name; - } else { - slice = PySequence_GetSlice(parts_tuple, 0, count); - if (unlikely(!slice)) - goto bad; - sep = PyUnicode_FromStringAndSize(".", 1); - if (unlikely(!sep)) - goto bad; - partial_name = PyUnicode_Join(sep, slice); - } - PyErr_Format( -#if PY_MAJOR_VERSION < 3 - PyExc_ImportError, - "No module named '%s'", 
PyString_AS_STRING(partial_name)); -#else -#if PY_VERSION_HEX >= 0x030600B1 - PyExc_ModuleNotFoundError, -#else - PyExc_ImportError, -#endif - "No module named '%U'", partial_name); -#endif -bad: - Py_XDECREF(sep); - Py_XDECREF(slice); - Py_XDECREF(partial_name); - return NULL; -} -#endif -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx__ImportDottedModule_Lookup(PyObject *name) { - PyObject *imported_module; -#if PY_VERSION_HEX < 0x030700A1 || (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - return NULL; - imported_module = __Pyx_PyDict_GetItemStr(modules, name); - Py_XINCREF(imported_module); -#else - imported_module = PyImport_GetModule(name); -#endif - return imported_module; -} -#endif -static PyObject *__Pyx__ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if PY_MAJOR_VERSION < 3 - PyObject *module, *from_list, *star = __pyx_n_s__3; - CYTHON_UNUSED_VAR(parts_tuple); - from_list = PyList_New(1); - if (unlikely(!from_list)) - return NULL; - Py_INCREF(star); - PyList_SET_ITEM(from_list, 0, star); - module = __Pyx_Import(name, from_list, 0); - Py_DECREF(from_list); - return module; -#else - Py_ssize_t i, nparts; - PyObject *imported_module; - PyObject *module = __Pyx_Import(name, NULL, 0); - if (!parts_tuple || unlikely(!module)) - return module; - imported_module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(imported_module)) { - Py_DECREF(module); - return imported_module; - } - PyErr_Clear(); - nparts = PyTuple_GET_SIZE(parts_tuple); - for (i=1; i < nparts && module; i++) { - PyObject *part, *submodule; -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - part = PyTuple_GET_ITEM(parts_tuple, i); -#else - part = PySequence_ITEM(parts_tuple, i); -#endif - submodule = __Pyx_PyObject_GetAttrStrNoError(module, part); -#if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(part); -#endif - Py_DECREF(module); - module = 
submodule; - } - if (likely(module)) - return module; - return __Pyx__ImportDottedModule_Error(name, parts_tuple, i); -#endif -} -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030400B1 - PyObject *module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(module)) { - PyObject *spec = __Pyx_PyObject_GetAttrStrNoError(module, __pyx_n_s_spec); - if (likely(spec)) { - PyObject *unsafe = __Pyx_PyObject_GetAttrStrNoError(spec, __pyx_n_s_initializing); - if (likely(!unsafe || !__Pyx_PyObject_IsTrue(unsafe))) { - Py_DECREF(spec); - spec = NULL; - } - Py_XDECREF(unsafe); - } - if (likely(!spec)) { - PyErr_Clear(); - return module; - } - Py_DECREF(spec); - Py_DECREF(module); - } else if (PyErr_Occurred()) { - PyErr_Clear(); - } -#endif - return __Pyx__ImportDottedModule(name, parts_tuple); -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - const char* module_name_str = 0; - PyObject* module_name = 0; - PyObject* module_dot = 0; - PyObject* full_name = 0; - PyErr_Clear(); - module_name_str = PyModule_GetName(module); - if (unlikely(!module_name_str)) { goto modbad; } - module_name = PyUnicode_FromString(module_name_str); - if (unlikely(!module_name)) { goto modbad; } - module_dot = PyUnicode_Concat(module_name, __pyx_kp_u__5); - if (unlikely(!module_dot)) { goto modbad; } - full_name = PyUnicode_Concat(module_dot, name); - if (unlikely(!full_name)) { goto modbad; } - #if PY_VERSION_HEX < 0x030700A1 || (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) - { - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - goto modbad; - value = PyObject_GetItem(modules, full_name); - } - #else - value = PyImport_GetModule(full_name); - #endif - modbad: - Py_XDECREF(full_name); - 
Py_XDECREF(module_dot); - Py_XDECREF(module_name); - } - if (unlikely(!value)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* RaiseNoneIterError */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); -} - -/* UnpackTupleError */ -static void __Pyx_UnpackTupleError(PyObject *t, Py_ssize_t index) { - if (t == Py_None) { - __Pyx_RaiseNoneNotIterableError(); - } else if (PyTuple_GET_SIZE(t) < index) { - __Pyx_RaiseNeedMoreValuesError(PyTuple_GET_SIZE(t)); - } else { - __Pyx_RaiseTooManyValuesError(index); - } -} - -/* UnpackTuple2 */ -static CYTHON_INLINE int __Pyx_unpack_tuple2_exact( - PyObject* tuple, PyObject** pvalue1, PyObject** pvalue2, int decref_tuple) { - PyObject *value1 = NULL, *value2 = NULL; -#if CYTHON_COMPILING_IN_PYPY - value1 = PySequence_ITEM(tuple, 0); if (unlikely(!value1)) goto bad; - value2 = PySequence_ITEM(tuple, 1); if (unlikely(!value2)) goto bad; -#else - value1 = PyTuple_GET_ITEM(tuple, 0); Py_INCREF(value1); - value2 = PyTuple_GET_ITEM(tuple, 1); Py_INCREF(value2); -#endif - if (decref_tuple) { - Py_DECREF(tuple); - } - *pvalue1 = value1; - *pvalue2 = value2; - return 0; -#if CYTHON_COMPILING_IN_PYPY -bad: - Py_XDECREF(value1); - Py_XDECREF(value2); - if (decref_tuple) { Py_XDECREF(tuple); } - return -1; -#endif -} -static int __Pyx_unpack_tuple2_generic(PyObject* tuple, PyObject** pvalue1, PyObject** pvalue2, - int has_known_size, int decref_tuple) { - Py_ssize_t index; - PyObject *value1 = NULL, *value2 = NULL, *iter = NULL; - iternextfunc iternext; - iter = PyObject_GetIter(tuple); - if (unlikely(!iter)) goto bad; - if (decref_tuple) { Py_DECREF(tuple); tuple = NULL; } - iternext = __Pyx_PyObject_GetIterNextFunc(iter); - value1 = iternext(iter); if (unlikely(!value1)) { index = 0; goto 
unpacking_failed; } - value2 = iternext(iter); if (unlikely(!value2)) { index = 1; goto unpacking_failed; } - if (!has_known_size && unlikely(__Pyx_IternextUnpackEndCheck(iternext(iter), 2))) goto bad; - Py_DECREF(iter); - *pvalue1 = value1; - *pvalue2 = value2; - return 0; -unpacking_failed: - if (!has_known_size && __Pyx_IterFinish() == 0) - __Pyx_RaiseNeedMoreValuesError(index); -bad: - Py_XDECREF(iter); - Py_XDECREF(value1); - Py_XDECREF(value2); - if (decref_tuple) { Py_XDECREF(tuple); } - return -1; -} - -/* dict_iter */ -static CYTHON_INLINE PyObject* __Pyx_dict_iterator(PyObject* iterable, int is_dict, PyObject* method_name, - Py_ssize_t* p_orig_length, int* p_source_is_dict) { - is_dict = is_dict || likely(PyDict_CheckExact(iterable)); - *p_source_is_dict = is_dict; - if (is_dict) { -#if !CYTHON_COMPILING_IN_PYPY - *p_orig_length = PyDict_Size(iterable); - Py_INCREF(iterable); - return iterable; -#elif PY_MAJOR_VERSION >= 3 - static PyObject *py_items = NULL, *py_keys = NULL, *py_values = NULL; - PyObject **pp = NULL; - if (method_name) { - const char *name = PyUnicode_AsUTF8(method_name); - if (strcmp(name, "iteritems") == 0) pp = &py_items; - else if (strcmp(name, "iterkeys") == 0) pp = &py_keys; - else if (strcmp(name, "itervalues") == 0) pp = &py_values; - if (pp) { - if (!*pp) { - *pp = PyUnicode_FromString(name + 4); - if (!*pp) - return NULL; - } - method_name = *pp; - } - } -#endif - } - *p_orig_length = 0; - if (method_name) { - PyObject* iter; - iterable = __Pyx_PyObject_CallMethod0(iterable, method_name); - if (!iterable) - return NULL; -#if !CYTHON_COMPILING_IN_PYPY - if (PyTuple_CheckExact(iterable) || PyList_CheckExact(iterable)) - return iterable; -#endif - iter = PyObject_GetIter(iterable); - Py_DECREF(iterable); - return iter; - } - return PyObject_GetIter(iterable); -} -static CYTHON_INLINE int __Pyx_dict_iter_next( - PyObject* iter_obj, CYTHON_NCP_UNUSED Py_ssize_t orig_length, CYTHON_NCP_UNUSED Py_ssize_t* ppos, - PyObject** pkey, 
PyObject** pvalue, PyObject** pitem, int source_is_dict) { - PyObject* next_item; -#if !CYTHON_COMPILING_IN_PYPY - if (source_is_dict) { - PyObject *key, *value; - if (unlikely(orig_length != PyDict_Size(iter_obj))) { - PyErr_SetString(PyExc_RuntimeError, "dictionary changed size during iteration"); - return -1; - } - if (unlikely(!PyDict_Next(iter_obj, ppos, &key, &value))) { - return 0; - } - if (pitem) { - PyObject* tuple = PyTuple_New(2); - if (unlikely(!tuple)) { - return -1; - } - Py_INCREF(key); - Py_INCREF(value); - PyTuple_SET_ITEM(tuple, 0, key); - PyTuple_SET_ITEM(tuple, 1, value); - *pitem = tuple; - } else { - if (pkey) { - Py_INCREF(key); - *pkey = key; - } - if (pvalue) { - Py_INCREF(value); - *pvalue = value; - } - } - return 1; - } else if (PyTuple_CheckExact(iter_obj)) { - Py_ssize_t pos = *ppos; - if (unlikely(pos >= PyTuple_GET_SIZE(iter_obj))) return 0; - *ppos = pos + 1; - next_item = PyTuple_GET_ITEM(iter_obj, pos); - Py_INCREF(next_item); - } else if (PyList_CheckExact(iter_obj)) { - Py_ssize_t pos = *ppos; - if (unlikely(pos >= PyList_GET_SIZE(iter_obj))) return 0; - *ppos = pos + 1; - next_item = PyList_GET_ITEM(iter_obj, pos); - Py_INCREF(next_item); - } else -#endif - { - next_item = PyIter_Next(iter_obj); - if (unlikely(!next_item)) { - return __Pyx_IterFinish(); - } - } - if (pitem) { - *pitem = next_item; - } else if (pkey && pvalue) { - if (__Pyx_unpack_tuple2(next_item, pkey, pvalue, source_is_dict, source_is_dict, 1)) - return -1; - } else if (pkey) { - *pkey = next_item; - } else { - *pvalue = next_item; - } - return 1; -} - -/* FetchCommonType */ -static PyObject *__Pyx_FetchSharedCythonABIModule(void) { - PyObject *abi_module = PyImport_AddModule((char*) __PYX_ABI_MODULE_NAME); - if (!abi_module) return NULL; - Py_INCREF(abi_module); - return abi_module; -} -static int __Pyx_VerifyCachedType(PyObject *cached_type, - const char *name, - Py_ssize_t basicsize, - Py_ssize_t expected_basicsize) { - if (!PyType_Check(cached_type)) { - 
PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s is not a type object", name); - return -1; - } - if (basicsize != expected_basicsize) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s has the wrong size, try recompiling", - name); - return -1; - } - return 0; -} -#if !CYTHON_USE_TYPE_SPECS -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) { - PyObject* abi_module; - const char* object_name; - PyTypeObject *cached_type = NULL; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) return NULL; - object_name = strrchr(type->tp_name, '.'); - object_name = object_name ? object_name+1 : type->tp_name; - cached_type = (PyTypeObject*) PyObject_GetAttrString(abi_module, object_name); - if (cached_type) { - if (__Pyx_VerifyCachedType( - (PyObject *)cached_type, - object_name, - cached_type->tp_basicsize, - type->tp_basicsize) < 0) { - goto bad; - } - goto done; - } - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - if (PyType_Ready(type) < 0) goto bad; - if (PyObject_SetAttrString(abi_module, object_name, (PyObject *)type) < 0) - goto bad; - Py_INCREF(type); - cached_type = type; -done: - Py_DECREF(abi_module); - return cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} -#else -static PyTypeObject *__Pyx_FetchCommonTypeFromSpec(PyObject *module, PyType_Spec *spec, PyObject *bases) { - PyObject *abi_module, *cached_type = NULL; - const char* object_name = strrchr(spec->name, '.'); - object_name = object_name ? 
object_name+1 : spec->name; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) return NULL; - cached_type = PyObject_GetAttrString(abi_module, object_name); - if (cached_type) { - Py_ssize_t basicsize; -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *py_basicsize; - py_basicsize = PyObject_GetAttrString(cached_type, "__basicsize__"); - if (unlikely(!py_basicsize)) goto bad; - basicsize = PyLong_AsSsize_t(py_basicsize); - Py_DECREF(py_basicsize); - py_basicsize = 0; - if (unlikely(basicsize == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; -#else - basicsize = likely(PyType_Check(cached_type)) ? ((PyTypeObject*) cached_type)->tp_basicsize : -1; -#endif - if (__Pyx_VerifyCachedType( - cached_type, - object_name, - basicsize, - spec->basicsize) < 0) { - goto bad; - } - goto done; - } - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - (void) module; - cached_type = __Pyx_PyType_FromModuleAndSpec(abi_module, spec, bases); - if (unlikely(!cached_type)) goto bad; - if (unlikely(__Pyx_fix_up_extension_type_from_spec(spec, (PyTypeObject *) cached_type) < 0)) goto bad; - if (PyObject_SetAttrString(abi_module, object_name, cached_type) < 0) goto bad; -done: - Py_DECREF(abi_module); - assert(cached_type == NULL || PyType_Check(cached_type)); - return (PyTypeObject *) cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} -#endif - -/* PyVectorcallFastCallDict */ -#if CYTHON_METH_FASTCALL -static PyObject *__Pyx_PyVectorcall_FastCallDict_kw(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - PyObject *res = NULL; - PyObject *kwnames; - PyObject **newargs; - PyObject **kwvalues; - Py_ssize_t i, pos; - size_t j; - PyObject *key, *value; - unsigned long keys_are_strings; - Py_ssize_t nkw = PyDict_GET_SIZE(kw); - newargs = (PyObject **)PyMem_Malloc((nargs + (size_t)nkw) * sizeof(args[0])); - if (unlikely(newargs == NULL)) { - PyErr_NoMemory(); - return NULL; - 
} - for (j = 0; j < nargs; j++) newargs[j] = args[j]; - kwnames = PyTuple_New(nkw); - if (unlikely(kwnames == NULL)) { - PyMem_Free(newargs); - return NULL; - } - kwvalues = newargs + nargs; - pos = i = 0; - keys_are_strings = Py_TPFLAGS_UNICODE_SUBCLASS; - while (PyDict_Next(kw, &pos, &key, &value)) { - keys_are_strings &= Py_TYPE(key)->tp_flags; - Py_INCREF(key); - Py_INCREF(value); - PyTuple_SET_ITEM(kwnames, i, key); - kwvalues[i] = value; - i++; - } - if (unlikely(!keys_are_strings)) { - PyErr_SetString(PyExc_TypeError, "keywords must be strings"); - goto cleanup; - } - res = vc(func, newargs, nargs, kwnames); -cleanup: - Py_DECREF(kwnames); - for (i = 0; i < nkw; i++) - Py_DECREF(kwvalues[i]); - PyMem_Free(newargs); - return res; -} -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - if (likely(kw == NULL) || PyDict_GET_SIZE(kw) == 0) { - return vc(func, args, nargs, NULL); - } - return __Pyx_PyVectorcall_FastCallDict_kw(func, vc, args, nargs, kw); -} -#endif - -/* CythonFunctionShared */ -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj) { -#if PY_VERSION_HEX < 0x030900B1 - __Pyx_Py_XDECREF_SET( - __Pyx_CyFunction_GetClassObj(f), - ((classobj) ? __Pyx_NewRef(classobj) : NULL)); -#else - __Pyx_Py_XDECREF_SET( - ((PyCMethodObject *) (f))->mm_class, - (PyTypeObject*)((classobj) ? 
__Pyx_NewRef(classobj) : NULL)); -#endif -} -static PyObject * -__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, void *closure) -{ - CYTHON_UNUSED_VAR(closure); - if (unlikely(op->func_doc == NULL)) { - if (((PyCFunctionObject*)op)->m_ml->ml_doc) { -#if PY_MAJOR_VERSION >= 3 - op->func_doc = PyUnicode_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); -#else - op->func_doc = PyString_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); -#endif - if (unlikely(op->func_doc == NULL)) - return NULL; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - Py_INCREF(op->func_doc); - return op->func_doc; -} -static int -__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (value == NULL) { - value = Py_None; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_doc, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(op->func_name == NULL)) { -#if PY_MAJOR_VERSION >= 3 - op->func_name = PyUnicode_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#else - op->func_name = PyString_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#endif - if (unlikely(op->func_name == NULL)) - return NULL; - } - Py_INCREF(op->func_name); - return op->func_name; -} -static int -__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_name, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_qualname); - 
return op->func_qualname; -} -static int -__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_qualname, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(op->func_dict == NULL)) { - op->func_dict = PyDict_New(); - if (unlikely(op->func_dict == NULL)) - return NULL; - } - Py_INCREF(op->func_dict); - return op->func_dict; -} -static int -__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(value == NULL)) { - PyErr_SetString(PyExc_TypeError, - "function's dictionary may not be deleted"); - return -1; - } - if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "setting function's dictionary to a non-dict"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_dict, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_globals); - return op->func_globals; -} -static PyObject * -__Pyx_CyFunction_get_closure(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(op); - CYTHON_UNUSED_VAR(context); - Py_INCREF(Py_None); - return Py_None; -} -static PyObject * -__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, void *context) -{ - PyObject* result = (op->func_code) ? 
op->func_code : Py_None; - CYTHON_UNUSED_VAR(context); - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) { - int result = 0; - PyObject *res = op->defaults_getter((PyObject *) op); - if (unlikely(!res)) - return -1; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - op->defaults_tuple = PyTuple_GET_ITEM(res, 0); - Py_INCREF(op->defaults_tuple); - op->defaults_kwdict = PyTuple_GET_ITEM(res, 1); - Py_INCREF(op->defaults_kwdict); - #else - op->defaults_tuple = PySequence_ITEM(res, 0); - if (unlikely(!op->defaults_tuple)) result = -1; - else { - op->defaults_kwdict = PySequence_ITEM(res, 1); - if (unlikely(!op->defaults_kwdict)) result = -1; - } - #endif - Py_DECREF(res); - return result; -} -static int -__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyTuple_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__defaults__ must be set to a tuple object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__defaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->defaults_tuple, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->defaults_tuple; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_tuple; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyDict_Check(value))) { - 
PyErr_SetString(PyExc_TypeError, - "__kwdefaults__ must be set to a dict object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__kwdefaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->defaults_kwdict, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->defaults_kwdict; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_kwdict; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value || value == Py_None) { - value = NULL; - } else if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__annotations__ must be set to a dict object"); - return -1; - } - Py_XINCREF(value); - __Pyx_Py_XDECREF_SET(op->func_annotations, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->func_annotations; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - result = PyDict_New(); - if (unlikely(!result)) return NULL; - op->func_annotations = result; - } - Py_INCREF(result); - return result; -} -static PyObject * -__Pyx_CyFunction_get_is_coroutine(__pyx_CyFunctionObject *op, void *context) { - int is_coroutine; - CYTHON_UNUSED_VAR(context); - if (op->func_is_coroutine) { - return __Pyx_NewRef(op->func_is_coroutine); - } - is_coroutine = op->flags & __Pyx_CYFUNCTION_COROUTINE; -#if PY_VERSION_HEX >= 0x03050000 - if (is_coroutine) { - PyObject *module, *fromlist, *marker = __pyx_n_s_is_coroutine; - fromlist = PyList_New(1); - if (unlikely(!fromlist)) return NULL; - 
Py_INCREF(marker); - PyList_SET_ITEM(fromlist, 0, marker); - module = PyImport_ImportModuleLevelObject(__pyx_n_s_asyncio_coroutines, NULL, NULL, fromlist, 0); - Py_DECREF(fromlist); - if (unlikely(!module)) goto ignore; - op->func_is_coroutine = __Pyx_PyObject_GetAttrStr(module, marker); - Py_DECREF(module); - if (likely(op->func_is_coroutine)) { - return __Pyx_NewRef(op->func_is_coroutine); - } -ignore: - PyErr_Clear(); - } -#endif - op->func_is_coroutine = __Pyx_PyBool_FromLong(is_coroutine); - return __Pyx_NewRef(op->func_is_coroutine); -} -static PyGetSetDef __pyx_CyFunction_getsets[] = { - {(char *) "func_doc", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "__doc__", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "func_name", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__name__", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__qualname__", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0}, - {(char *) "func_dict", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "__dict__", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "func_globals", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "__globals__", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "func_closure", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "__closure__", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "func_code", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "__code__", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "func_defaults", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__defaults__", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) 
"__kwdefaults__", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0}, - {(char *) "__annotations__", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0}, - {(char *) "_is_coroutine", (getter)__Pyx_CyFunction_get_is_coroutine, 0, 0, 0}, - {0, 0, 0, 0, 0} -}; -static PyMemberDef __pyx_CyFunction_members[] = { - {(char *) "__module__", T_OBJECT, offsetof(PyCFunctionObject, m_module), 0, 0}, -#if CYTHON_USE_TYPE_SPECS - {(char *) "__dictoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_dict), READONLY, 0}, -#if CYTHON_METH_FASTCALL -#if CYTHON_BACKPORT_VECTORCALL - {(char *) "__vectorcalloffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_vectorcall), READONLY, 0}, -#else - {(char *) "__vectorcalloffset__", T_PYSSIZET, offsetof(PyCFunctionObject, vectorcall), READONLY, 0}, -#endif -#endif -#if PY_VERSION_HEX < 0x030500A0 - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_weakreflist), READONLY, 0}, -#else - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(PyCFunctionObject, m_weakreflist), READONLY, 0}, -#endif -#endif - {0, 0, 0, 0, 0} -}; -static PyObject * -__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, PyObject *args) -{ - CYTHON_UNUSED_VAR(args); -#if PY_MAJOR_VERSION >= 3 - Py_INCREF(m->func_qualname); - return m->func_qualname; -#else - return PyString_FromString(((PyCFunctionObject*)m)->m_ml->ml_name); -#endif -} -static PyMethodDef __pyx_CyFunction_methods[] = { - {"__reduce__", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0}, - {0, 0, 0, 0} -}; -#if PY_VERSION_HEX < 0x030500A0 -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist) -#else -#define __Pyx_CyFunction_weakreflist(cyfunc) (((PyCFunctionObject*)cyfunc)->m_weakreflist) -#endif -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject *op, PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* 
globals, PyObject* code) { - PyCFunctionObject *cf = (PyCFunctionObject*) op; - if (unlikely(op == NULL)) - return NULL; - op->flags = flags; - __Pyx_CyFunction_weakreflist(op) = NULL; - cf->m_ml = ml; - cf->m_self = (PyObject *) op; - Py_XINCREF(closure); - op->func_closure = closure; - Py_XINCREF(module); - cf->m_module = module; - op->func_dict = NULL; - op->func_name = NULL; - Py_INCREF(qualname); - op->func_qualname = qualname; - op->func_doc = NULL; -#if PY_VERSION_HEX < 0x030900B1 - op->func_classobj = NULL; -#else - ((PyCMethodObject*)op)->mm_class = NULL; -#endif - op->func_globals = globals; - Py_INCREF(op->func_globals); - Py_XINCREF(code); - op->func_code = code; - op->defaults_pyobjects = 0; - op->defaults_size = 0; - op->defaults = NULL; - op->defaults_tuple = NULL; - op->defaults_kwdict = NULL; - op->defaults_getter = NULL; - op->func_annotations = NULL; - op->func_is_coroutine = NULL; -#if CYTHON_METH_FASTCALL - switch (ml->ml_flags & (METH_VARARGS | METH_FASTCALL | METH_NOARGS | METH_O | METH_KEYWORDS | METH_METHOD)) { - case METH_NOARGS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_NOARGS; - break; - case METH_O: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_O; - break; - case METH_METHOD | METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD; - break; - case METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS; - break; - case METH_VARARGS | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = NULL; - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction"); - Py_DECREF(op); - return NULL; - } -#endif - return (PyObject *) op; -} -static int -__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m) -{ - Py_CLEAR(m->func_closure); - Py_CLEAR(((PyCFunctionObject*)m)->m_module); - Py_CLEAR(m->func_dict); - Py_CLEAR(m->func_name); - 
Py_CLEAR(m->func_qualname); - Py_CLEAR(m->func_doc); - Py_CLEAR(m->func_globals); - Py_CLEAR(m->func_code); -#if PY_VERSION_HEX < 0x030900B1 - Py_CLEAR(__Pyx_CyFunction_GetClassObj(m)); -#else - { - PyObject *cls = (PyObject*) ((PyCMethodObject *) (m))->mm_class; - ((PyCMethodObject *) (m))->mm_class = NULL; - Py_XDECREF(cls); - } -#endif - Py_CLEAR(m->defaults_tuple); - Py_CLEAR(m->defaults_kwdict); - Py_CLEAR(m->func_annotations); - Py_CLEAR(m->func_is_coroutine); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_XDECREF(pydefaults[i]); - PyObject_Free(m->defaults); - m->defaults = NULL; - } - return 0; -} -static void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - if (__Pyx_CyFunction_weakreflist(m) != NULL) - PyObject_ClearWeakRefs((PyObject *) m); - __Pyx_CyFunction_clear(m); - __Pyx_PyHeapTypeObject_GC_Del(m); -} -static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - PyObject_GC_UnTrack(m); - __Pyx__CyFunction_dealloc(m); -} -static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg) -{ - Py_VISIT(m->func_closure); - Py_VISIT(((PyCFunctionObject*)m)->m_module); - Py_VISIT(m->func_dict); - Py_VISIT(m->func_name); - Py_VISIT(m->func_qualname); - Py_VISIT(m->func_doc); - Py_VISIT(m->func_globals); - Py_VISIT(m->func_code); - Py_VISIT(__Pyx_CyFunction_GetClassObj(m)); - Py_VISIT(m->defaults_tuple); - Py_VISIT(m->defaults_kwdict); - Py_VISIT(m->func_is_coroutine); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_VISIT(pydefaults[i]); - } - return 0; -} -static PyObject* -__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op) -{ -#if PY_MAJOR_VERSION >= 3 - return PyUnicode_FromFormat("<cyfunction %U at %p>", - op->func_qualname, (void *)op); -#else - return PyString_FromFormat("<cyfunction %s at %p>", - PyString_AsString(op->func_qualname), (void *)op); 
-#endif -} -static PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) { - PyCFunctionObject* f = (PyCFunctionObject*)func; - PyCFunction meth = f->m_ml->ml_meth; - Py_ssize_t size; - switch (f->m_ml->ml_flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) { - case METH_VARARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) - return (*meth)(self, arg); - break; - case METH_VARARGS | METH_KEYWORDS: - return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw); - case METH_NOARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { - size = PyTuple_GET_SIZE(arg); - if (likely(size == 0)) - return (*meth)(self, NULL); - PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); - return NULL; - } - break; - case METH_O: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { - size = PyTuple_GET_SIZE(arg); - if (likely(size == 1)) { - PyObject *result, *arg0; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - arg0 = PyTuple_GET_ITEM(arg, 0); - #else - arg0 = PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL; - #endif - result = (*meth)(self, arg0); - #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(arg0); - #endif - return result; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); - return NULL; - } - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction"); - return NULL; - } - PyErr_Format(PyExc_TypeError, "%.200s() takes no keyword arguments", - f->m_ml->ml_name); - return NULL; -} -static CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) { - return __Pyx_CyFunction_CallMethod(func, ((PyCFunctionObject*)func)->m_self, arg, kw); -} -static PyObject *__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) { - 
PyObject *result; - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func; -#if CYTHON_METH_FASTCALL - __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); - if (vc) { -#if CYTHON_ASSUME_SAFE_MACROS - return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); -#else - (void) &__Pyx_PyVectorcall_FastCallDict; - return PyVectorcall_Call(func, args, kw); -#endif - } -#endif - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - Py_ssize_t argc; - PyObject *new_args; - PyObject *self; - argc = PyTuple_GET_SIZE(args); - new_args = PyTuple_GetSlice(args, 1, argc); - if (unlikely(!new_args)) - return NULL; - self = PyTuple_GetItem(args, 0); - if (unlikely(!self)) { - Py_DECREF(new_args); - return NULL; - } - result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw); - Py_DECREF(new_args); - } else { - result = __Pyx_CyFunction_Call(func, args, kw); - } - return result; -} -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE int __Pyx_CyFunction_Vectorcall_CheckArgs(__pyx_CyFunctionObject *cyfunc, Py_ssize_t nargs, PyObject *kwnames) -{ - int ret = 0; - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - if (unlikely(nargs < 1)) { - PyErr_Format(PyExc_TypeError, "%.200s() needs an argument", - ((PyCFunctionObject*)cyfunc)->m_ml->ml_name); - return -1; - } - ret = 1; - } - if (unlikely(kwnames) && unlikely(PyTuple_GET_SIZE(kwnames))) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes no keyword arguments", ((PyCFunctionObject*)cyfunc)->m_ml->ml_name); - return -1; - } - return ret; -} -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = 
(Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - if (unlikely(nargs != 0)) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - def->ml_name, nargs); - return NULL; - } - return def->ml_meth(self, NULL); -} -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - if (unlikely(nargs != 1)) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - def->ml_name, nargs); - return NULL; - } - return def->ml_meth(self, args[0]); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = 
((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - return ((_PyCFunctionFastWithKeywords)(void(*)(void))def->ml_meth)(self, args, nargs, kwnames); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; - PyTypeObject *cls = (PyTypeObject *) __Pyx_CyFunction_GetClassObj(cyfunc); -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - return ((__Pyx_PyCMethod)(void(*)(void))def->ml_meth)(self, cls, args, nargs, kwnames); -} -#endif -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_CyFunctionType_slots[] = { - {Py_tp_dealloc, (void *)__Pyx_CyFunction_dealloc}, - {Py_tp_repr, (void *)__Pyx_CyFunction_repr}, - {Py_tp_call, (void *)__Pyx_CyFunction_CallAsMethod}, - {Py_tp_traverse, (void *)__Pyx_CyFunction_traverse}, - {Py_tp_clear, (void *)__Pyx_CyFunction_clear}, - {Py_tp_methods, (void *)__pyx_CyFunction_methods}, - {Py_tp_members, (void *)__pyx_CyFunction_members}, - {Py_tp_getset, (void *)__pyx_CyFunction_getsets}, - {Py_tp_descr_get, (void *)__Pyx_PyMethod_New}, - {0, 0}, -}; -static PyType_Spec __pyx_CyFunctionType_spec = { - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#if (defined(_Py_TPFLAGS_HAVE_VECTORCALL) && CYTHON_METH_FASTCALL) - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - __pyx_CyFunctionType_slots 
-}; -#else -static PyTypeObject __pyx_CyFunctionType_type = { - PyVarObject_HEAD_INIT(0, 0) - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, - (destructor) __Pyx_CyFunction_dealloc, -#if !CYTHON_METH_FASTCALL - 0, -#elif CYTHON_BACKPORT_VECTORCALL - (printfunc)offsetof(__pyx_CyFunctionObject, func_vectorcall), -#else - offsetof(PyCFunctionObject, vectorcall), -#endif - 0, - 0, -#if PY_MAJOR_VERSION < 3 - 0, -#else - 0, -#endif - (reprfunc) __Pyx_CyFunction_repr, - 0, - 0, - 0, - 0, - __Pyx_CyFunction_CallAsMethod, - 0, - 0, - 0, - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#ifdef _Py_TPFLAGS_HAVE_VECTORCALL - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - 0, - (traverseproc) __Pyx_CyFunction_traverse, - (inquiry) __Pyx_CyFunction_clear, - 0, -#if PY_VERSION_HEX < 0x030500A0 - offsetof(__pyx_CyFunctionObject, func_weakreflist), -#else - offsetof(PyCFunctionObject, m_weakreflist), -#endif - 0, - 0, - __pyx_CyFunction_methods, - __pyx_CyFunction_members, - __pyx_CyFunction_getsets, - 0, - 0, - __Pyx_PyMethod_New, - 0, - offsetof(__pyx_CyFunctionObject, func_dict), - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, -#endif -}; -#endif -static int __pyx_CyFunction_init(PyObject *module) { -#if CYTHON_USE_TYPE_SPECS - __pyx_CyFunctionType = __Pyx_FetchCommonTypeFromSpec(module, &__pyx_CyFunctionType_spec, NULL); -#else - (void) module; - __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type); -#endif - if (unlikely(__pyx_CyFunctionType == NULL)) { - return -1; - } - return 0; -} -static CYTHON_INLINE void 
*__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults = PyObject_Malloc(size); - if (unlikely(!m->defaults)) - return PyErr_NoMemory(); - memset(m->defaults, 0, size); - m->defaults_pyobjects = pyobjects; - m->defaults_size = size; - return m->defaults; -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_tuple = tuple; - Py_INCREF(tuple); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_kwdict = dict; - Py_INCREF(dict); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->func_annotations = dict; - Py_INCREF(dict); -} - -/* CythonFunction */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyObject *op = __Pyx_CyFunction_Init( - PyObject_GC_New(__pyx_CyFunctionObject, __pyx_CyFunctionType), - ml, flags, qualname, closure, module, globals, code - ); - if (likely(op)) { - PyObject_GC_Track(op); - } - return op; -} - -/* CalculateMetaclass */ -static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases) { - Py_ssize_t i, nbases = PyTuple_GET_SIZE(bases); - for (i=0; i < nbases; i++) { - PyTypeObject *tmptype; - PyObject *tmp = PyTuple_GET_ITEM(bases, i); - tmptype = Py_TYPE(tmp); -#if PY_MAJOR_VERSION < 3 - if (tmptype == &PyClass_Type) - continue; -#endif - if (!metaclass) { - metaclass = tmptype; - continue; - } - if (PyType_IsSubtype(metaclass, tmptype)) - continue; - if (PyType_IsSubtype(tmptype, metaclass)) { - metaclass = tmptype; - continue; - } - 
PyErr_SetString(PyExc_TypeError, - "metaclass conflict: " - "the metaclass of a derived class " - "must be a (non-strict) subclass " - "of the metaclasses of all its bases"); - return NULL; - } - if (!metaclass) { -#if PY_MAJOR_VERSION < 3 - metaclass = &PyClass_Type; -#else - metaclass = &PyType_Type; -#endif - } - Py_INCREF((PyObject*) metaclass); - return (PyObject*) metaclass; -} - -/* Py3ClassCreate */ -static PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, - PyObject *qualname, PyObject *mkw, PyObject *modname, PyObject *doc) { - PyObject *ns; - if (metaclass) { - PyObject *prep = __Pyx_PyObject_GetAttrStrNoError(metaclass, __pyx_n_s_prepare); - if (prep) { - PyObject *pargs[3] = {NULL, name, bases}; - ns = __Pyx_PyObject_FastCallDict(prep, pargs+1, 2 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET, mkw); - Py_DECREF(prep); - } else { - if (unlikely(PyErr_Occurred())) - return NULL; - ns = PyDict_New(); - } - } else { - ns = PyDict_New(); - } - if (unlikely(!ns)) - return NULL; - if (unlikely(PyObject_SetItem(ns, __pyx_n_s_module, modname) < 0)) goto bad; -#if PY_VERSION_HEX >= 0x03030000 - if (unlikely(PyObject_SetItem(ns, __pyx_n_s_qualname, qualname) < 0)) goto bad; -#else - CYTHON_MAYBE_UNUSED_VAR(qualname); -#endif - if (unlikely(doc && PyObject_SetItem(ns, __pyx_n_s_doc, doc) < 0)) goto bad; - return ns; -bad: - Py_DECREF(ns); - return NULL; -} -#if PY_VERSION_HEX < 0x030600A4 && CYTHON_PEP487_INIT_SUBCLASS -static int __Pyx_SetNamesPEP487(PyObject *type_obj) { - PyTypeObject *type = (PyTypeObject*) type_obj; - PyObject *names_to_set, *key, *value, *set_name, *tmp; - Py_ssize_t i = 0; -#if CYTHON_USE_TYPE_SLOTS - names_to_set = PyDict_Copy(type->tp_dict); -#else - { - PyObject *d = PyObject_GetAttr(type_obj, __pyx_n_s_dict); - names_to_set = NULL; - if (likely(d)) { - PyObject *names_to_set = PyDict_New(); - int ret = likely(names_to_set) ? 
PyDict_Update(names_to_set, d) : -1; - Py_DECREF(d); - if (unlikely(ret < 0)) - Py_CLEAR(names_to_set); - } - } -#endif - if (unlikely(names_to_set == NULL)) - goto bad; - while (PyDict_Next(names_to_set, &i, &key, &value)) { - set_name = __Pyx_PyObject_LookupSpecialNoError(value, __pyx_n_s_set_name); - if (unlikely(set_name != NULL)) { - tmp = __Pyx_PyObject_Call2Args(set_name, type_obj, key); - Py_DECREF(set_name); - if (unlikely(tmp == NULL)) { - __Pyx_TypeName value_type_name = - __Pyx_PyType_GetName(Py_TYPE(value)); - __Pyx_TypeName type_name = __Pyx_PyType_GetName(type); - PyErr_Format(PyExc_RuntimeError, -#if PY_MAJOR_VERSION >= 3 - "Error calling __set_name__ on '" __Pyx_FMT_TYPENAME "' instance %R " "in '" __Pyx_FMT_TYPENAME "'", - value_type_name, key, type_name); -#else - "Error calling __set_name__ on '" __Pyx_FMT_TYPENAME "' instance %.100s in '" __Pyx_FMT_TYPENAME "'", - value_type_name, - PyString_Check(key) ? PyString_AS_STRING(key) : "?", - type_name); -#endif - goto bad; - } else { - Py_DECREF(tmp); - } - } - else if (unlikely(PyErr_Occurred())) { - goto bad; - } - } - Py_DECREF(names_to_set); - return 0; -bad: - Py_XDECREF(names_to_set); - return -1; -} -static PyObject *__Pyx_InitSubclassPEP487(PyObject *type_obj, PyObject *mkw) { -#if CYTHON_USE_TYPE_SLOTS && CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - PyTypeObject *type = (PyTypeObject*) type_obj; - PyObject *mro = type->tp_mro; - Py_ssize_t i, nbases; - if (unlikely(!mro)) goto done; - (void) &__Pyx_GetBuiltinName; - Py_INCREF(mro); - nbases = PyTuple_GET_SIZE(mro); - assert(PyTuple_GET_ITEM(mro, 0) == type_obj); - for (i = 1; i < nbases-1; i++) { - PyObject *base, *dict, *meth; - base = PyTuple_GET_ITEM(mro, i); - dict = ((PyTypeObject *)base)->tp_dict; - meth = __Pyx_PyDict_GetItemStrWithError(dict, __pyx_n_s_init_subclass); - if (unlikely(meth)) { - descrgetfunc f = Py_TYPE(meth)->tp_descr_get; - PyObject *res; - Py_INCREF(meth); - if (likely(f)) { - res = f(meth, NULL, 
type_obj); - Py_DECREF(meth); - if (unlikely(!res)) goto bad; - meth = res; - } - res = __Pyx_PyObject_FastCallDict(meth, NULL, 0, mkw); - Py_DECREF(meth); - if (unlikely(!res)) goto bad; - Py_DECREF(res); - goto done; - } else if (unlikely(PyErr_Occurred())) { - goto bad; - } - } -done: - Py_XDECREF(mro); - return type_obj; -bad: - Py_XDECREF(mro); - Py_DECREF(type_obj); - return NULL; -#else - PyObject *super_type, *super, *func, *res; -#if CYTHON_COMPILING_IN_PYPY && !defined(PySuper_Type) - super_type = __Pyx_GetBuiltinName(__pyx_n_s_super); -#else - super_type = (PyObject*) &PySuper_Type; - (void) &__Pyx_GetBuiltinName; -#endif - super = likely(super_type) ? __Pyx_PyObject_Call2Args(super_type, type_obj, type_obj) : NULL; -#if CYTHON_COMPILING_IN_PYPY && !defined(PySuper_Type) - Py_XDECREF(super_type); -#endif - if (unlikely(!super)) { - Py_CLEAR(type_obj); - goto done; - } - func = __Pyx_PyObject_GetAttrStrNoError(super, __pyx_n_s_init_subclass); - Py_DECREF(super); - if (likely(!func)) { - if (unlikely(PyErr_Occurred())) - Py_CLEAR(type_obj); - goto done; - } - res = __Pyx_PyObject_FastCallDict(func, NULL, 0, mkw); - Py_DECREF(func); - if (unlikely(!res)) - Py_CLEAR(type_obj); - Py_XDECREF(res); -done: - return type_obj; -#endif -} -#endif -static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, - PyObject *dict, PyObject *mkw, - int calculate_metaclass, int allow_py2_metaclass) { - PyObject *result; - PyObject *owned_metaclass = NULL; - PyObject *margs[4] = {NULL, name, bases, dict}; - if (allow_py2_metaclass) { - owned_metaclass = PyObject_GetItem(dict, __pyx_n_s_metaclass); - if (owned_metaclass) { - metaclass = owned_metaclass; - } else if (likely(PyErr_ExceptionMatches(PyExc_KeyError))) { - PyErr_Clear(); - } else { - return NULL; - } - } - if (calculate_metaclass && (!metaclass || PyType_Check(metaclass))) { - metaclass = __Pyx_CalculateMetaclass((PyTypeObject*) metaclass, bases); - Py_XDECREF(owned_metaclass); - if 
(unlikely(!metaclass)) - return NULL; - owned_metaclass = metaclass; - } - result = __Pyx_PyObject_FastCallDict(metaclass, margs+1, 3 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET, -#if PY_VERSION_HEX < 0x030600A4 - (metaclass == (PyObject*)&PyType_Type) ? NULL : mkw -#else - mkw -#endif - ); - Py_XDECREF(owned_metaclass); -#if PY_VERSION_HEX < 0x030600A4 && CYTHON_PEP487_INIT_SUBCLASS - if (likely(result) && likely(PyType_Check(result))) { - if (unlikely(__Pyx_SetNamesPEP487(result) < 0)) { - Py_CLEAR(result); - } else { - result = __Pyx_InitSubclassPEP487(result, mkw); - } - } -#else - (void) &__Pyx_GetBuiltinName; -#endif - return result; -} - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(CYTHON_NCP_UNUSED PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStrNoError(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? 
Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - (void) PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - 
__pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} -#endif - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -#if CYTHON_COMPILING_IN_LIMITED_API -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - if (c_line) { - (void) __pyx_cfilenm; - c_line = __Pyx_CLineForTraceback(__Pyx_PyThreadState_Current, c_line); - } - _PyTraceback_Add(funcname, filename, c_line ? 
-c_line : py_line); -} -#else -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = NULL; - PyObject *py_funcname = NULL; - #if PY_MAJOR_VERSION < 3 - PyObject *py_srcfile = NULL; - py_srcfile = PyString_FromString(filename); - if (!py_srcfile) goto bad; - #endif - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - funcname = PyUnicode_AsUTF8(py_funcname); - if (!funcname) goto bad; - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - if (!py_funcname) goto bad; - #endif - } - #if PY_MAJOR_VERSION < 3 - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - #else - py_code = PyCode_NewEmpty(filename, funcname, py_line); - #endif - Py_XDECREF(py_funcname); // XDECREF since it's only set on Py3 if cline - return py_code; -bad: - Py_XDECREF(py_funcname); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_srcfile); - #endif - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? 
-c_line : py_line); - if (!py_code) { - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) goto bad; - __pyx_insert_code_object(c_line ? -c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} -#endif - -/* CIntToPy */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* FormatTypeName */ -#if CYTHON_COMPILING_IN_LIMITED_API -static __Pyx_TypeName -__Pyx_PyType_GetName(PyTypeObject* tp) -{ - PyObject *name = __Pyx_PyObject_GetAttrStr((PyObject *)tp, - __pyx_n_s_name_2); - if (unlikely(name == NULL) || 
unlikely(!PyUnicode_Check(name))) { - PyErr_Clear(); - Py_XSETREF(name, __Pyx_NewRef(__pyx_n_s__64)); - } - return name; -} -#endif - -/* CIntFromPyVerify */ -#define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* CIntFromPy */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(long) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - 
__PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 2 * PyLong_SHIFT)) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 3 * PyLong_SHIFT)) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 4 * PyLong_SHIFT)) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(long) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = 
((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) - case -2: - if ((8 * sizeof(long) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if ((8 * 
sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } -#endif - if ((sizeof(long) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if (CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available, cannot convert large numbers"); -#else - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, 
sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(int) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 2 * PyLong_SHIFT)) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, 
(((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 3 * PyLong_SHIFT)) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 4 * PyLong_SHIFT)) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(int) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) - case -2: - if ((8 * sizeof(int) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * 
sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | 
(int)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } -#endif - if ((sizeof(int) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if (CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available, cannot convert large numbers"); -#else - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; 
-} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = __Pyx_PyType_GetSlot(a, tp_base, PyTypeObject*); - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (cls == a || cls == b) return 1; - mro = cls->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - PyObject *base = PyTuple_GET_ITEM(mro, i); - if (base == (PyObject *)a || base == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(cls, a) || __Pyx_InBases(cls, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? 
PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - if (exc_type1) { - return __Pyx_IsAnySubtype2((PyTypeObject*)err, (PyTypeObject*)exc_type1, (PyTypeObject*)exc_type2); - } else { - return __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; iexc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* CoroutineBase */ -#include -#define __Pyx_Coroutine_Undelegate(gen) Py_CLEAR((gen)->yieldfrom) -static int __Pyx_PyGen__FetchStopIterationValue(PyThreadState *__pyx_tstate, PyObject **pvalue) { - PyObject *et, *ev, *tb; - PyObject *value = NULL; - CYTHON_UNUSED_VAR(__pyx_tstate); - __Pyx_ErrFetch(&et, &ev, &tb); - if 
(!et) { - Py_XDECREF(tb); - Py_XDECREF(ev); - Py_INCREF(Py_None); - *pvalue = Py_None; - return 0; - } - if (likely(et == PyExc_StopIteration)) { - if (!ev) { - Py_INCREF(Py_None); - value = Py_None; - } -#if PY_VERSION_HEX >= 0x030300A0 - else if (likely(__Pyx_IS_TYPE(ev, (PyTypeObject*)PyExc_StopIteration))) { - value = ((PyStopIterationObject *)ev)->value; - Py_INCREF(value); - Py_DECREF(ev); - } -#endif - else if (unlikely(PyTuple_Check(ev))) { - if (PyTuple_GET_SIZE(ev) >= 1) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - value = PyTuple_GET_ITEM(ev, 0); - Py_INCREF(value); -#else - value = PySequence_ITEM(ev, 0); -#endif - } else { - Py_INCREF(Py_None); - value = Py_None; - } - Py_DECREF(ev); - } - else if (!__Pyx_TypeCheck(ev, (PyTypeObject*)PyExc_StopIteration)) { - value = ev; - } - if (likely(value)) { - Py_XDECREF(tb); - Py_DECREF(et); - *pvalue = value; - return 0; - } - } else if (!__Pyx_PyErr_GivenExceptionMatches(et, PyExc_StopIteration)) { - __Pyx_ErrRestore(et, ev, tb); - return -1; - } - PyErr_NormalizeException(&et, &ev, &tb); - if (unlikely(!PyObject_TypeCheck(ev, (PyTypeObject*)PyExc_StopIteration))) { - __Pyx_ErrRestore(et, ev, tb); - return -1; - } - Py_XDECREF(tb); - Py_DECREF(et); -#if PY_VERSION_HEX >= 0x030300A0 - value = ((PyStopIterationObject *)ev)->value; - Py_INCREF(value); - Py_DECREF(ev); -#else - { - PyObject* args = __Pyx_PyObject_GetAttrStr(ev, __pyx_n_s_args); - Py_DECREF(ev); - if (likely(args)) { - value = PySequence_GetItem(args, 0); - Py_DECREF(args); - } - if (unlikely(!value)) { - __Pyx_ErrRestore(NULL, NULL, NULL); - Py_INCREF(Py_None); - value = Py_None; - } - } -#endif - *pvalue = value; - return 0; -} -static CYTHON_INLINE -void __Pyx_Coroutine_ExceptionClear(__Pyx_ExcInfoStruct *exc_state) { - PyObject *t, *v, *tb; - t = exc_state->exc_type; - v = exc_state->exc_value; - tb = exc_state->exc_traceback; - exc_state->exc_type = NULL; - exc_state->exc_value = NULL; - exc_state->exc_traceback = NULL; - 
Py_XDECREF(t); - Py_XDECREF(v); - Py_XDECREF(tb); -} -#define __Pyx_Coroutine_AlreadyRunningError(gen) (__Pyx__Coroutine_AlreadyRunningError(gen), (PyObject*)NULL) -static void __Pyx__Coroutine_AlreadyRunningError(__pyx_CoroutineObject *gen) { - const char *msg; - CYTHON_MAYBE_UNUSED_VAR(gen); - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check((PyObject*)gen)) { - msg = "coroutine already executing"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact((PyObject*)gen)) { - msg = "async generator already executing"; - #endif - } else { - msg = "generator already executing"; - } - PyErr_SetString(PyExc_ValueError, msg); -} -#define __Pyx_Coroutine_NotStartedError(gen) (__Pyx__Coroutine_NotStartedError(gen), (PyObject*)NULL) -static void __Pyx__Coroutine_NotStartedError(PyObject *gen) { - const char *msg; - CYTHON_MAYBE_UNUSED_VAR(gen); - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check(gen)) { - msg = "can't send non-None value to a just-started coroutine"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact(gen)) { - msg = "can't send non-None value to a just-started async generator"; - #endif - } else { - msg = "can't send non-None value to a just-started generator"; - } - PyErr_SetString(PyExc_TypeError, msg); -} -#define __Pyx_Coroutine_AlreadyTerminatedError(gen, value, closing) (__Pyx__Coroutine_AlreadyTerminatedError(gen, value, closing), (PyObject*)NULL) -static void __Pyx__Coroutine_AlreadyTerminatedError(PyObject *gen, PyObject *value, int closing) { - CYTHON_MAYBE_UNUSED_VAR(gen); - CYTHON_MAYBE_UNUSED_VAR(closing); - #ifdef __Pyx_Coroutine_USED - if (!closing && __Pyx_Coroutine_Check(gen)) { - PyErr_SetString(PyExc_RuntimeError, "cannot reuse already awaited coroutine"); - } else - #endif - if (value) { - #ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(gen)) - PyErr_SetNone(__Pyx_PyExc_StopAsyncIteration); - else - #endif - 
PyErr_SetNone(PyExc_StopIteration); - } -} -static -PyObject *__Pyx_Coroutine_SendEx(__pyx_CoroutineObject *self, PyObject *value, int closing) { - __Pyx_PyThreadState_declare - PyThreadState *tstate; - __Pyx_ExcInfoStruct *exc_state; - PyObject *retval; - assert(!self->is_running); - if (unlikely(self->resume_label == 0)) { - if (unlikely(value && value != Py_None)) { - return __Pyx_Coroutine_NotStartedError((PyObject*)self); - } - } - if (unlikely(self->resume_label == -1)) { - return __Pyx_Coroutine_AlreadyTerminatedError((PyObject*)self, value, closing); - } -#if CYTHON_FAST_THREAD_STATE - __Pyx_PyThreadState_assign - tstate = __pyx_tstate; -#else - tstate = __Pyx_PyThreadState_Current; -#endif - exc_state = &self->gi_exc_state; - if (exc_state->exc_type) { - #if CYTHON_COMPILING_IN_PYPY - #else - if (exc_state->exc_traceback) { - PyTracebackObject *tb = (PyTracebackObject *) exc_state->exc_traceback; - PyFrameObject *f = tb->tb_frame; - assert(f->f_back == NULL); - #if PY_VERSION_HEX >= 0x030B00A1 - f->f_back = PyThreadState_GetFrame(tstate); - #else - Py_XINCREF(tstate->frame); - f->f_back = tstate->frame; - #endif - } - #endif - } -#if CYTHON_USE_EXC_INFO_STACK - exc_state->previous_item = tstate->exc_info; - tstate->exc_info = exc_state; -#else - if (exc_state->exc_type) { - __Pyx_ExceptionSwap(&exc_state->exc_type, &exc_state->exc_value, &exc_state->exc_traceback); - } else { - __Pyx_Coroutine_ExceptionClear(exc_state); - __Pyx_ExceptionSave(&exc_state->exc_type, &exc_state->exc_value, &exc_state->exc_traceback); - } -#endif - self->is_running = 1; - retval = self->body(self, tstate, value); - self->is_running = 0; -#if CYTHON_USE_EXC_INFO_STACK - exc_state = &self->gi_exc_state; - tstate->exc_info = exc_state->previous_item; - exc_state->previous_item = NULL; - __Pyx_Coroutine_ResetFrameBackpointer(exc_state); -#endif - return retval; -} -static CYTHON_INLINE void __Pyx_Coroutine_ResetFrameBackpointer(__Pyx_ExcInfoStruct *exc_state) { - PyObject *exc_tb = 
exc_state->exc_traceback; - if (likely(exc_tb)) { -#if CYTHON_COMPILING_IN_PYPY -#else - PyTracebackObject *tb = (PyTracebackObject *) exc_tb; - PyFrameObject *f = tb->tb_frame; - Py_CLEAR(f->f_back); -#endif - } -} -static CYTHON_INLINE -PyObject *__Pyx_Coroutine_MethodReturn(PyObject* gen, PyObject *retval) { - CYTHON_MAYBE_UNUSED_VAR(gen); - if (unlikely(!retval)) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (!__Pyx_PyErr_Occurred()) { - PyObject *exc = PyExc_StopIteration; - #ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(gen)) - exc = __Pyx_PyExc_StopAsyncIteration; - #endif - __Pyx_PyErr_SetNone(exc); - } - } - return retval; -} -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) -static CYTHON_INLINE -PyObject *__Pyx_PyGen_Send(PyGenObject *gen, PyObject *arg) { -#if PY_VERSION_HEX <= 0x030A00A1 - return _PyGen_Send(gen, arg); -#else - PyObject *result; - if (PyIter_Send((PyObject*)gen, arg ? 
arg : Py_None, &result) == PYGEN_RETURN) { - if (PyAsyncGen_CheckExact(gen)) { - assert(result == Py_None); - PyErr_SetNone(PyExc_StopAsyncIteration); - } - else if (result == Py_None) { - PyErr_SetNone(PyExc_StopIteration); - } - else { - _PyGen_SetStopIterationValue(result); - } - Py_CLEAR(result); - } - return result; -#endif -} -#endif -static CYTHON_INLINE -PyObject *__Pyx_Coroutine_FinishDelegation(__pyx_CoroutineObject *gen) { - PyObject *ret; - PyObject *val = NULL; - __Pyx_Coroutine_Undelegate(gen); - __Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, &val); - ret = __Pyx_Coroutine_SendEx(gen, val, 0); - Py_XDECREF(val); - return ret; -} -static PyObject *__Pyx_Coroutine_Send(PyObject *self, PyObject *value) { - PyObject *retval; - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self; - PyObject *yf = gen->yieldfrom; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - PyObject *ret; - gen->is_running = 1; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - ret = __Pyx_Coroutine_Send(yf, value); - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - ret = __Pyx_Coroutine_Send(yf, value); - } else - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_PyAsyncGenASend_CheckExact(yf)) { - ret = __Pyx_async_gen_asend_send(yf, value); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) - if (PyGen_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, value == Py_None ? NULL : value); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03050000 && defined(PyCoro_CheckExact) && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) - if (PyCoro_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, value == Py_None ? 
NULL : value); - } else - #endif - { - if (value == Py_None) - ret = __Pyx_PyObject_GetIterNextFunc(yf)(yf); - else - ret = __Pyx_PyObject_CallMethod1(yf, __pyx_n_s_send, value); - } - gen->is_running = 0; - if (likely(ret)) { - return ret; - } - retval = __Pyx_Coroutine_FinishDelegation(gen); - } else { - retval = __Pyx_Coroutine_SendEx(gen, value, 0); - } - return __Pyx_Coroutine_MethodReturn(self, retval); -} -static int __Pyx_Coroutine_CloseIter(__pyx_CoroutineObject *gen, PyObject *yf) { - PyObject *retval = NULL; - int err = 0; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - retval = __Pyx_Coroutine_Close(yf); - if (!retval) - return -1; - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - retval = __Pyx_Coroutine_Close(yf); - if (!retval) - return -1; - } else - if (__Pyx_CoroutineAwait_CheckExact(yf)) { - retval = __Pyx_CoroutineAwait_Close((__pyx_CoroutineAwaitObject*)yf, NULL); - if (!retval) - return -1; - } else - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_PyAsyncGenASend_CheckExact(yf)) { - retval = __Pyx_async_gen_asend_close(yf, NULL); - } else - if (__pyx_PyAsyncGenAThrow_CheckExact(yf)) { - retval = __Pyx_async_gen_athrow_close(yf, NULL); - } else - #endif - { - PyObject *meth; - gen->is_running = 1; - meth = __Pyx_PyObject_GetAttrStrNoError(yf, __pyx_n_s_close); - if (unlikely(!meth)) { - if (unlikely(PyErr_Occurred())) { - PyErr_WriteUnraisable(yf); - } - } else { - retval = __Pyx_PyObject_CallNoArg(meth); - Py_DECREF(meth); - if (unlikely(!retval)) - err = -1; - } - gen->is_running = 0; - } - Py_XDECREF(retval); - return err; -} -static PyObject *__Pyx_Generator_Next(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self; - PyObject *yf = gen->yieldfrom; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - PyObject *ret; - gen->is_running = 1; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - ret 
= __Pyx_Generator_Next(yf); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) - if (PyGen_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, NULL); - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - ret = __Pyx_Coroutine_Send(yf, Py_None); - } else - #endif - ret = __Pyx_PyObject_GetIterNextFunc(yf)(yf); - gen->is_running = 0; - if (likely(ret)) { - return ret; - } - return __Pyx_Coroutine_FinishDelegation(gen); - } - return __Pyx_Coroutine_SendEx(gen, Py_None, 0); -} -static PyObject *__Pyx_Coroutine_Close_Method(PyObject *self, PyObject *arg) { - CYTHON_UNUSED_VAR(arg); - return __Pyx_Coroutine_Close(self); -} -static PyObject *__Pyx_Coroutine_Close(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject *retval, *raised_exception; - PyObject *yf = gen->yieldfrom; - int err = 0; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - Py_INCREF(yf); - err = __Pyx_Coroutine_CloseIter(gen, yf); - __Pyx_Coroutine_Undelegate(gen); - Py_DECREF(yf); - } - if (err == 0) - PyErr_SetNone(PyExc_GeneratorExit); - retval = __Pyx_Coroutine_SendEx(gen, NULL, 1); - if (unlikely(retval)) { - const char *msg; - Py_DECREF(retval); - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check(self)) { - msg = "coroutine ignored GeneratorExit"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact(self)) { -#if PY_VERSION_HEX < 0x03060000 - msg = "async generator ignored GeneratorExit - might require Python 3.6+ finalisation (PEP 525)"; -#else - msg = "async generator ignored GeneratorExit"; -#endif - #endif - } else { - msg = "generator ignored GeneratorExit"; - } - PyErr_SetString(PyExc_RuntimeError, msg); - return NULL; - } - raised_exception = PyErr_Occurred(); - if (likely(!raised_exception || 
__Pyx_PyErr_GivenExceptionMatches2(raised_exception, PyExc_GeneratorExit, PyExc_StopIteration))) { - if (raised_exception) PyErr_Clear(); - Py_INCREF(Py_None); - return Py_None; - } - return NULL; -} -static PyObject *__Pyx__Coroutine_Throw(PyObject *self, PyObject *typ, PyObject *val, PyObject *tb, - PyObject *args, int close_on_genexit) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject *yf = gen->yieldfrom; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - PyObject *ret; - Py_INCREF(yf); - if (__Pyx_PyErr_GivenExceptionMatches(typ, PyExc_GeneratorExit) && close_on_genexit) { - int err = __Pyx_Coroutine_CloseIter(gen, yf); - Py_DECREF(yf); - __Pyx_Coroutine_Undelegate(gen); - if (err < 0) - return __Pyx_Coroutine_MethodReturn(self, __Pyx_Coroutine_SendEx(gen, NULL, 0)); - goto throw_here; - } - gen->is_running = 1; - if (0 - #ifdef __Pyx_Generator_USED - || __Pyx_Generator_CheckExact(yf) - #endif - #ifdef __Pyx_Coroutine_USED - || __Pyx_Coroutine_Check(yf) - #endif - ) { - ret = __Pyx__Coroutine_Throw(yf, typ, val, tb, args, close_on_genexit); - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_CoroutineAwait_CheckExact(yf)) { - ret = __Pyx__Coroutine_Throw(((__pyx_CoroutineAwaitObject*)yf)->coroutine, typ, val, tb, args, close_on_genexit); - #endif - } else { - PyObject *meth = __Pyx_PyObject_GetAttrStrNoError(yf, __pyx_n_s_throw); - if (unlikely(!meth)) { - Py_DECREF(yf); - if (unlikely(PyErr_Occurred())) { - gen->is_running = 0; - return NULL; - } - __Pyx_Coroutine_Undelegate(gen); - gen->is_running = 0; - goto throw_here; - } - if (likely(args)) { - ret = __Pyx_PyObject_Call(meth, args, NULL); - } else { - PyObject *cargs[4] = {NULL, typ, val, tb}; - ret = __Pyx_PyObject_FastCall(meth, cargs+1, 3 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); - } - Py_DECREF(meth); - } - gen->is_running = 0; - Py_DECREF(yf); - if (!ret) { - ret = __Pyx_Coroutine_FinishDelegation(gen); - } - return 
__Pyx_Coroutine_MethodReturn(self, ret); - } -throw_here: - __Pyx_Raise(typ, val, tb, NULL); - return __Pyx_Coroutine_MethodReturn(self, __Pyx_Coroutine_SendEx(gen, NULL, 0)); -} -static PyObject *__Pyx_Coroutine_Throw(PyObject *self, PyObject *args) { - PyObject *typ; - PyObject *val = NULL; - PyObject *tb = NULL; - if (unlikely(!PyArg_UnpackTuple(args, (char *)"throw", 1, 3, &typ, &val, &tb))) - return NULL; - return __Pyx__Coroutine_Throw(self, typ, val, tb, args, 1); -} -static CYTHON_INLINE int __Pyx_Coroutine_traverse_excstate(__Pyx_ExcInfoStruct *exc_state, visitproc visit, void *arg) { - Py_VISIT(exc_state->exc_type); - Py_VISIT(exc_state->exc_value); - Py_VISIT(exc_state->exc_traceback); - return 0; -} -static int __Pyx_Coroutine_traverse(__pyx_CoroutineObject *gen, visitproc visit, void *arg) { - Py_VISIT(gen->closure); - Py_VISIT(gen->classobj); - Py_VISIT(gen->yieldfrom); - return __Pyx_Coroutine_traverse_excstate(&gen->gi_exc_state, visit, arg); -} -static int __Pyx_Coroutine_clear(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - Py_CLEAR(gen->closure); - Py_CLEAR(gen->classobj); - Py_CLEAR(gen->yieldfrom); - __Pyx_Coroutine_ExceptionClear(&gen->gi_exc_state); -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - Py_CLEAR(((__pyx_PyAsyncGenObject*)gen)->ag_finalizer); - } -#endif - Py_CLEAR(gen->gi_code); - Py_CLEAR(gen->gi_frame); - Py_CLEAR(gen->gi_name); - Py_CLEAR(gen->gi_qualname); - Py_CLEAR(gen->gi_modulename); - return 0; -} -static void __Pyx_Coroutine_dealloc(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject_GC_UnTrack(gen); - if (gen->gi_weakreflist != NULL) - PyObject_ClearWeakRefs(self); - if (gen->resume_label >= 0) { - PyObject_GC_Track(self); -#if PY_VERSION_HEX >= 0x030400a1 && CYTHON_USE_TP_FINALIZE - if (unlikely(PyObject_CallFinalizerFromDealloc(self))) -#else - Py_TYPE(gen)->tp_del(self); - if (unlikely(Py_REFCNT(self) > 0)) -#endif - { - 
return; - } - PyObject_GC_UnTrack(self); - } -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - /* We have to handle this case for asynchronous generators - right here, because this code has to be between UNTRACK - and GC_Del. */ - Py_CLEAR(((__pyx_PyAsyncGenObject*)self)->ag_finalizer); - } -#endif - __Pyx_Coroutine_clear(self); - __Pyx_PyHeapTypeObject_GC_Del(gen); -} -static void __Pyx_Coroutine_del(PyObject *self) { - PyObject *error_type, *error_value, *error_traceback; - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - __Pyx_PyThreadState_declare - if (gen->resume_label < 0) { - return; - } -#if !CYTHON_USE_TP_FINALIZE - assert(self->ob_refcnt == 0); - __Pyx_SET_REFCNT(self, 1); -#endif - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&error_type, &error_value, &error_traceback); -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - __pyx_PyAsyncGenObject *agen = (__pyx_PyAsyncGenObject*)self; - PyObject *finalizer = agen->ag_finalizer; - if (finalizer && !agen->ag_closed) { - PyObject *res = __Pyx_PyObject_CallOneArg(finalizer, self); - if (unlikely(!res)) { - PyErr_WriteUnraisable(self); - } else { - Py_DECREF(res); - } - __Pyx_ErrRestore(error_type, error_value, error_traceback); - return; - } - } -#endif - if (unlikely(gen->resume_label == 0 && !error_value)) { -#ifdef __Pyx_Coroutine_USED -#ifdef __Pyx_Generator_USED - if (!__Pyx_Generator_CheckExact(self)) -#endif - { - PyObject_GC_UnTrack(self); -#if PY_MAJOR_VERSION >= 3 || defined(PyErr_WarnFormat) - if (unlikely(PyErr_WarnFormat(PyExc_RuntimeWarning, 1, "coroutine '%.50S' was never awaited", gen->gi_qualname) < 0)) - PyErr_WriteUnraisable(self); -#else - {PyObject *msg; - char *cmsg; - #if CYTHON_COMPILING_IN_PYPY - msg = NULL; - cmsg = (char*) "coroutine was never awaited"; - #else - char *cname; - PyObject *qualname; - qualname = gen->gi_qualname; - cname = PyString_AS_STRING(qualname); - msg = PyString_FromFormat("coroutine '%.50s' was never 
awaited", cname); - if (unlikely(!msg)) { - PyErr_Clear(); - cmsg = (char*) "coroutine was never awaited"; - } else { - cmsg = PyString_AS_STRING(msg); - } - #endif - if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, cmsg, 1) < 0)) - PyErr_WriteUnraisable(self); - Py_XDECREF(msg);} -#endif - PyObject_GC_Track(self); - } -#endif - } else { - PyObject *res = __Pyx_Coroutine_Close(self); - if (unlikely(!res)) { - if (PyErr_Occurred()) - PyErr_WriteUnraisable(self); - } else { - Py_DECREF(res); - } - } - __Pyx_ErrRestore(error_type, error_value, error_traceback); -#if !CYTHON_USE_TP_FINALIZE - assert(Py_REFCNT(self) > 0); - if (likely(--self->ob_refcnt == 0)) { - return; - } - { - Py_ssize_t refcnt = Py_REFCNT(self); - _Py_NewReference(self); - __Pyx_SET_REFCNT(self, refcnt); - } -#if CYTHON_COMPILING_IN_CPYTHON - assert(PyType_IS_GC(Py_TYPE(self)) && - _Py_AS_GC(self)->gc.gc_refs != _PyGC_REFS_UNTRACKED); - _Py_DEC_REFTOTAL; -#endif -#ifdef COUNT_ALLOCS - --Py_TYPE(self)->tp_frees; - --Py_TYPE(self)->tp_allocs; -#endif -#endif -} -static PyObject * -__Pyx_Coroutine_get_name(__pyx_CoroutineObject *self, void *context) -{ - PyObject *name = self->gi_name; - CYTHON_UNUSED_VAR(context); - if (unlikely(!name)) name = Py_None; - Py_INCREF(name); - return name; -} -static int -__Pyx_Coroutine_set_name(__pyx_CoroutineObject *self, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(self->gi_name, value); - return 0; -} -static PyObject * -__Pyx_Coroutine_get_qualname(__pyx_CoroutineObject *self, void *context) -{ - PyObject *name = self->gi_qualname; - CYTHON_UNUSED_VAR(context); - if (unlikely(!name)) name = Py_None; - Py_INCREF(name); - return name; -} -static int 
-__Pyx_Coroutine_set_qualname(__pyx_CoroutineObject *self, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(self->gi_qualname, value); - return 0; -} -static PyObject * -__Pyx_Coroutine_get_frame(__pyx_CoroutineObject *self, void *context) -{ - PyObject *frame = self->gi_frame; - CYTHON_UNUSED_VAR(context); - if (!frame) { - if (unlikely(!self->gi_code)) { - Py_RETURN_NONE; - } - frame = (PyObject *) PyFrame_New( - PyThreadState_Get(), /*PyThreadState *tstate,*/ - (PyCodeObject*) self->gi_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (unlikely(!frame)) - return NULL; - self->gi_frame = frame; - } - Py_INCREF(frame); - return frame; -} -static __pyx_CoroutineObject *__Pyx__Coroutine_New( - PyTypeObject* type, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name) { - __pyx_CoroutineObject *gen = PyObject_GC_New(__pyx_CoroutineObject, type); - if (unlikely(!gen)) - return NULL; - return __Pyx__Coroutine_NewInit(gen, body, code, closure, name, qualname, module_name); -} -static __pyx_CoroutineObject *__Pyx__Coroutine_NewInit( - __pyx_CoroutineObject *gen, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name) { - gen->body = body; - gen->closure = closure; - Py_XINCREF(closure); - gen->is_running = 0; - gen->resume_label = 0; - gen->classobj = NULL; - gen->yieldfrom = NULL; - gen->gi_exc_state.exc_type = NULL; - gen->gi_exc_state.exc_value = NULL; - gen->gi_exc_state.exc_traceback = NULL; -#if CYTHON_USE_EXC_INFO_STACK - gen->gi_exc_state.previous_item = NULL; 
-#endif - gen->gi_weakreflist = NULL; - Py_XINCREF(qualname); - gen->gi_qualname = qualname; - Py_XINCREF(name); - gen->gi_name = name; - Py_XINCREF(module_name); - gen->gi_modulename = module_name; - Py_XINCREF(code); - gen->gi_code = code; - gen->gi_frame = NULL; - PyObject_GC_Track(gen); - return gen; -} - -/* PatchModuleWithCoroutine */ -static PyObject* __Pyx_Coroutine_patch_module(PyObject* module, const char* py_code) { -#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - int result; - PyObject *globals, *result_obj; - globals = PyDict_New(); if (unlikely(!globals)) goto ignore; - result = PyDict_SetItemString(globals, "_cython_coroutine_type", - #ifdef __Pyx_Coroutine_USED - (PyObject*)__pyx_CoroutineType); - #else - Py_None); - #endif - if (unlikely(result < 0)) goto ignore; - result = PyDict_SetItemString(globals, "_cython_generator_type", - #ifdef __Pyx_Generator_USED - (PyObject*)__pyx_GeneratorType); - #else - Py_None); - #endif - if (unlikely(result < 0)) goto ignore; - if (unlikely(PyDict_SetItemString(globals, "_module", module) < 0)) goto ignore; - if (unlikely(PyDict_SetItemString(globals, "__builtins__", __pyx_b) < 0)) goto ignore; - result_obj = PyRun_String(py_code, Py_file_input, globals, globals); - if (unlikely(!result_obj)) goto ignore; - Py_DECREF(result_obj); - Py_DECREF(globals); - return module; -ignore: - Py_XDECREF(globals); - PyErr_WriteUnraisable(module); - if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, "Cython module failed to patch module with custom type", 1) < 0)) { - Py_DECREF(module); - module = NULL; - } -#else - py_code++; -#endif - return module; -} - -/* PatchGeneratorABC */ -#ifndef CYTHON_REGISTER_ABCS -#define CYTHON_REGISTER_ABCS 1 -#endif -#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) -static PyObject* __Pyx_patch_abc_module(PyObject *module); -static PyObject* __Pyx_patch_abc_module(PyObject *module) { - module = __Pyx_Coroutine_patch_module( - module, "" -"if 
_cython_generator_type is not None:\n" -" try: Generator = _module.Generator\n" -" except AttributeError: pass\n" -" else: Generator.register(_cython_generator_type)\n" -"if _cython_coroutine_type is not None:\n" -" try: Coroutine = _module.Coroutine\n" -" except AttributeError: pass\n" -" else: Coroutine.register(_cython_coroutine_type)\n" - ); - return module; -} -#endif -static int __Pyx_patch_abc(void) { -#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - static int abc_patched = 0; - if (CYTHON_REGISTER_ABCS && !abc_patched) { - PyObject *module; - module = PyImport_ImportModule((PY_MAJOR_VERSION >= 3) ? "collections.abc" : "collections"); - if (unlikely(!module)) { - PyErr_WriteUnraisable(NULL); - if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, - ((PY_MAJOR_VERSION >= 3) ? - "Cython module failed to register with collections.abc module" : - "Cython module failed to register with collections module"), 1) < 0)) { - return -1; - } - } else { - module = __Pyx_patch_abc_module(module); - abc_patched = 1; - if (unlikely(!module)) - return -1; - Py_DECREF(module); - } - module = PyImport_ImportModule("backports_abc"); - if (module) { - module = __Pyx_patch_abc_module(module); - Py_XDECREF(module); - } - if (!module) { - PyErr_Clear(); - } - } -#else - if ((0)) __Pyx_Coroutine_patch_module(NULL, NULL); -#endif - return 0; -} - -/* Generator */ -static PyMethodDef __pyx_Generator_methods[] = { - {"send", (PyCFunction) __Pyx_Coroutine_Send, METH_O, - (char*) PyDoc_STR("send(arg) -> send 'arg' into generator,\nreturn next yielded value or raise StopIteration.")}, - {"throw", (PyCFunction) __Pyx_Coroutine_Throw, METH_VARARGS, - (char*) PyDoc_STR("throw(typ[,val[,tb]]) -> raise exception in generator,\nreturn next yielded value or raise StopIteration.")}, - {"close", (PyCFunction) __Pyx_Coroutine_Close_Method, METH_NOARGS, - (char*) PyDoc_STR("close() -> raise GeneratorExit inside generator.")}, - {0, 0, 0, 0} -}; -static PyMemberDef 
__pyx_Generator_memberlist[] = { - {(char *) "gi_running", T_BOOL, offsetof(__pyx_CoroutineObject, is_running), READONLY, NULL}, - {(char*) "gi_yieldfrom", T_OBJECT, offsetof(__pyx_CoroutineObject, yieldfrom), READONLY, - (char*) PyDoc_STR("object being iterated by 'yield from', or None")}, - {(char*) "gi_code", T_OBJECT, offsetof(__pyx_CoroutineObject, gi_code), READONLY, NULL}, - {(char *) "__module__", T_OBJECT, offsetof(__pyx_CoroutineObject, gi_modulename), 0, 0}, -#if CYTHON_USE_TYPE_SPECS - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(__pyx_CoroutineObject, gi_weakreflist), READONLY, 0}, -#endif - {0, 0, 0, 0, 0} -}; -static PyGetSetDef __pyx_Generator_getsets[] = { - {(char *) "__name__", (getter)__Pyx_Coroutine_get_name, (setter)__Pyx_Coroutine_set_name, - (char*) PyDoc_STR("name of the generator"), 0}, - {(char *) "__qualname__", (getter)__Pyx_Coroutine_get_qualname, (setter)__Pyx_Coroutine_set_qualname, - (char*) PyDoc_STR("qualified name of the generator"), 0}, - {(char *) "gi_frame", (getter)__Pyx_Coroutine_get_frame, NULL, - (char*) PyDoc_STR("Frame of the generator"), 0}, - {0, 0, 0, 0, 0} -}; -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_GeneratorType_slots[] = { - {Py_tp_dealloc, (void *)__Pyx_Coroutine_dealloc}, - {Py_tp_traverse, (void *)__Pyx_Coroutine_traverse}, - {Py_tp_iter, (void *)PyObject_SelfIter}, - {Py_tp_iternext, (void *)__Pyx_Generator_Next}, - {Py_tp_methods, (void *)__pyx_Generator_methods}, - {Py_tp_members, (void *)__pyx_Generator_memberlist}, - {Py_tp_getset, (void *)__pyx_Generator_getsets}, - {Py_tp_getattro, (void *) __Pyx_PyObject_GenericGetAttrNoDict}, -#if CYTHON_USE_TP_FINALIZE - {Py_tp_finalize, (void *)__Pyx_Coroutine_del}, -#endif - {0, 0}, -}; -static PyType_Spec __pyx_GeneratorType_spec = { - __PYX_TYPE_MODULE_PREFIX "generator", - sizeof(__pyx_CoroutineObject), - 0, - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_HAVE_FINALIZE, - __pyx_GeneratorType_slots -}; -#else -static PyTypeObject 
__pyx_GeneratorType_type = { - PyVarObject_HEAD_INIT(0, 0) - __PYX_TYPE_MODULE_PREFIX "generator", - sizeof(__pyx_CoroutineObject), - 0, - (destructor) __Pyx_Coroutine_dealloc, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_HAVE_FINALIZE, - 0, - (traverseproc) __Pyx_Coroutine_traverse, - 0, - 0, - offsetof(__pyx_CoroutineObject, gi_weakreflist), - 0, - (iternextfunc) __Pyx_Generator_Next, - __pyx_Generator_methods, - __pyx_Generator_memberlist, - __pyx_Generator_getsets, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if CYTHON_USE_TP_FINALIZE - 0, -#else - __Pyx_Coroutine_del, -#endif - 0, -#if CYTHON_USE_TP_FINALIZE - __Pyx_Coroutine_del, -#elif PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, -#endif -}; -#endif -static int __pyx_Generator_init(PyObject *module) { -#if CYTHON_USE_TYPE_SPECS - __pyx_GeneratorType = __Pyx_FetchCommonTypeFromSpec(module, &__pyx_GeneratorType_spec, NULL); -#else - (void) module; - __pyx_GeneratorType_type.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - __pyx_GeneratorType_type.tp_iter = PyObject_SelfIter; - __pyx_GeneratorType = __Pyx_FetchCommonType(&__pyx_GeneratorType_type); -#endif - if (unlikely(!__pyx_GeneratorType)) { - return -1; - } - return 0; -} - -/* CheckBinaryVersion */ -static int __Pyx_check_binary_version(void) { - char ctversion[4], rtversion[4]; - PyOS_snprintf(ctversion, 4, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - PyOS_snprintf(rtversion, 4, "%s", Py_GetVersion()); - if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) { - char message[200]; - PyOS_snprintf(message, sizeof(message), - "compile time version %s of module '%.100s' " - "does 
not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ -#if PY_MAJOR_VERSION >= 3 -static int __Pyx_InitString(__Pyx_StringTabEntry t, PyObject **str) { - if (t.is_unicode | t.is_str) { - if (t.intern) { - *str = PyUnicode_InternFromString(t.s); - } else if (t.encoding) { - *str = PyUnicode_Decode(t.s, t.n - 1, t.encoding, NULL); - } else { - *str = PyUnicode_FromStringAndSize(t.s, t.n - 1); - } - } else { - *str = PyBytes_FromStringAndSize(t.s, t.n - 1); - } - if (!*str) - return -1; - if (PyObject_Hash(*str) == -1) - return -1; - return 0; -} -#endif -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION >= 3 - __Pyx_InitString(*t, t->p); - #else - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - #endif - ++t; - } - return 0; -} -#endif - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned 
char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY && !CYTHON_COMPILING_IN_LIMITED_API) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { - __Pyx_TypeName result_type_name = __Pyx_PyType_GetName(Py_TYPE(result)); -#if 
PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type " __Pyx_FMT_TYPENAME "). " - "The ability to return an instance of a strict subclass of int is deprecated, " - "and may be removed in a future version of Python.", - result_type_name)) { - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; - } - __Pyx_DECREF_TypeName(result_type_name); - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type " __Pyx_FMT_TYPENAME ")", - type_name, type_name, result_type_name); - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= 
sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)b)->ob_digit; - const Py_ssize_t size = Py_SIZE(b); - if (likely(__Pyx_sst_abs(size) <= 1)) { - ival = likely(size) ? digits[0] : 0; - if (size == -1) ival = -ival; - return ival; - } else { - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); -#if PY_MAJOR_VERSION < 3 - } else if 
(likely(PyInt_CheckExact(o))) { - return PyInt_AS_LONG(o); -#endif - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyInt_AsLong(x); - Py_DECREF(x); - return ival; - } -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -/* #### Code section: utility_code_pragmas_end ### */ -#if _MSV_VER -#pragma warning( pop ) -#endif - - - -/* #### Code section: end ### */ -#endif /* Py_PYTHON_H */ diff --git a/spaces/megamined/voice-gpt/README.md b/spaces/megamined/voice-gpt/README.md deleted file mode 100644 index b433f3d74833f7ad033a8ae57a6911959567b898..0000000000000000000000000000000000000000 --- a/spaces/megamined/voice-gpt/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Voice Gpt -emoji: 🏃 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.28.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/merve/anonymization/source/measuring-diversity/columns-height.js b/spaces/merve/anonymization/source/measuring-diversity/columns-height.js deleted file mode 100644 index 3933c17b4bb8abe209b3573bb436c53c47543b1b..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/source/measuring-diversity/columns-height.js +++ /dev/null @@ -1,177 +0,0 @@ -window.initColumns = function(id, metrics, measures){ - var c = d3.conventions({ - sel: d3.select(id).html('').st({width: 775, margin: '0px auto', left: 27}), - margin: {left: 260, top: 40}, - height: 600, - }) - - var sets = d3.range(numRows).map(i => { - var shapes = columnShapes[i] - shapes = _.sortBy(shapes, d => d.shape) - shapes = _.sortBy(shapes, d => d.size) - shapes = _.sortBy(shapes, d => d.color) - shapes = _.sortBy(shapes, d => d.color == 'green' ? 
0 : 1) - - - shapes.nG = d3.sum(shapes, d => d.color == 'green') - shapes.nB = d3.sum(shapes, d => d.color == 'blue') - shapes.nO = d3.sum(shapes, d => d.color == 'orange') - shapes.nR = d3.sum(shapes, d => d.color == 'red') - - shapes.forEach((d, i) => { - d.i = i - d.sizeVal = d.sizeVal < 1 ? .6 : 1 - }) - shapes.i = i - return shapes - }) - - var colW = 200 - var colWpad = 50 - var colH = 20 - var colHpad = 10 - var offsetW = -20 - - var colSel = c.svg.appendMany('g', measures) - .translate((d, i) => [.5 + i*(colW + colWpad) + offsetW, .5]) - - colSel.append('text').text(d => d.ranking_display_text) - .at({y: -20, textAnchor: 'middle', x: colW/2, fontWeight: 600, }) - - var rowSel = colSel.appendMany('g.row', sets) - .translate(d => d.i*(colH + colHpad), 1) - - var colMean = colSel.filter((d, i) => i === 0) - var colMin = colSel.filter((d, i) => i === 1) - var scoreLabelsMean = colMean.selectAll('.row').append('text') - .at({x: -5, y: 15, textAnchor: 'end'}) - .st({fontSize: '13px', opacity: .7}) - var scoreLabelsMin = colMin.selectAll('.row').append('text') - .at({x: 222, y: 15, textAnchor: 'end'}) - .st({fontSize: '13px', opacity: .7}) - - colSel.each(function(d, i){ - d.rowSel = d3.select(this).selectAll('.row') - - c.svg.append('marker') - .attr('id', 'arrow') - .attr('viewBox', '-10 -10 20 20') - .attr('markerWidth', 20) - .attr('markerHeight', 20) - .attr('orient', 'auto') - .append('path') - .attr('d', 'M-6.75,-6.75 L 0,0 L -6.75,6.75') - .at({fill: '#000'}) - - - if (i){ - var pathstr = ['M', 160, -25, 'C', 215, -25, 215, -25, 215, -5].join(' ') - } else{ - var pathstr = ['M', 35, -25, 'C', -20, -25, -20, -25, -20, -5].join(' ') - } - d3.select(this).append('path') - .at({stroke: '#000', fill: 'none', d: pathstr, markerEnd: 'url(#arrow)', strokeWidth: .6}) - }) - - - var s = colH - var p = 2 - - var l0Sel = c.svg.appendMany('path.set', sets).classed('set1', true) - .translate(d => [colW + offsetW, s/2 + .5]) - - drawRow(rowSel) - function 
drawRow(rowSel){ - rowSel.append('rect.set.no-stroke') - .at({x: -p, y: -p, width: colW + p*2, height: colH + p*2, fill: '#fff'}).classed('set1', true) - - rowSel.appendMany('g', d => d) - .translate(d => [d.i*s + s/2, s/2]) - .each(function(d){ - - var sOffset = 12 - var classNames = [d.shape, d.size, d.color, 'rank-item'].join(' ') - var shapeSel = d3.select(this).append('rect') - .at({ - x: -s/2, - y: -s/2 + (d.size == 'small' ? sOffset/2 : 0) - .5, - width: s - .5, - height: s - (d.size == 'small' ? sOffset : 0), - fill: d.fill, - class: classNames - }) - - if (d.shape == 'triangle'){ - var shapeSel = d3.select(this).append('circle') - .at({r: 2, fill: '#fff', stroke: '#000', strokeWidth: .5, class: classNames}) - } - }) - - } - - var setSel = c.svg.selectAll('.set1') - .on('mouseover', selectSet) - - sets.selected = sets[0] - function selectSet(set){ - sets.selected = set - sets.forEach(d => d.selected = d == set) - setSel - .classed('selected', d => d.selected) - .filter(d => d.selected) - .lower() - - rowSel.classed('selected', d => d.selected) - - sliders.render() - } - - - var sliders = makeSliders(metrics, sets, c, selectSet, drawRow, () => { - sets.forEach(shapes => { - shapes.score = metrics.map(m => { - var v = d3.sum(shapes, (d, i) => shapes[i][m.field] == m.key) - return Math.abs(m.target - v/shapes.length) - }) - }) - - measures.forEach(m => { - sets.forEach(shapes => { - shapes[m.str] = m.fn(shapes.score) - }) - _.sortBy(sets, d => d[m.str] + d.i/10000000)//.reverse() - .forEach((d, i) => d['i' + m.str] = i) - - m.rowSel.translate(d => d['i' + m.str]*(colH + colHpad), 1) - }) - - var p = 0 - l0Sel.at({d: d => [ - 'M', p, d['iUtilitarian']*(colH + colHpad), - 'L', colWpad - p, d['iEgalitarian']*(colH + colHpad), - ].join(' ')}) - - - scoreLabelsMean.text(d => { - return d3.format('.2f')(d['Utilitarian'])// + '%' - }) - scoreLabelsMin.text(d => { - return measures[1].ppFn(d['score']).replace('%', '')// + '%' - }) - }) - - sliders.render() - 
selectSet(_.sortBy(sets, d => d.iEgalitarian)[0]) -} -window.initColumns('#columns-height', metrics1, measures) -window.initColumns('#columns-height-disagree', metrics2, measures2) - -// Only highlight green items in the second ranking chart. -d3.select('#columns-height-disagree').selectAll('.rank-item').at({opacity: .3}) -d3.select('#columns-height-disagree').selectAll('.green').at({opacity: 1}) - -// Only highlight the green slider in the second ranking chart. -d3.select('#columns-height-disagree').selectAll('.slider').at({opacity: d => { - return d.key !== 'green' ? 0.35: 1 -}}) - diff --git a/spaces/merve/data-leak/source/base-rate/sliders.js b/spaces/merve/data-leak/source/base-rate/sliders.js deleted file mode 100644 index 994c9ba490dc44dfa015553d32ff24e822f16de0..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/source/base-rate/sliders.js +++ /dev/null @@ -1,103 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - - - - - -var sliderVals = {} - -var sliders = [ - { - key: 'fNoiseMag', - text: 'Feature Noise', - r: [0, 1], - v: .5 - }, - { - key: 'fBiasMag', - text: 'Feature Bias', - r: [0, 1], - v: .2 - }, -] - -!(function(){ - var width = 145 - var height = 30 - - sliders.forEach(d => { - d.s = d3.scaleLinear().domain(d.r).range([0, width]) - sliderVals[d.key] = d - }) - - var sliderSel = d3.select('.slider').html('') - .appendMany('div', sliders) - .at({class: d => d.key}) - .st({ - display: 'inline-block', - width: width, - paddingRight: 60, - marginTop: 20, - color: '#000' - }) - - sliderSel.append('div') - .text(d => d.text) - .st({marginBottom: height/2}) - - var svgSel = sliderSel.append('svg').at({width, height}) - .on('click', function(d){ - d.v = d.s.invert(d3.mouse(this)[0]) - updatePos() - }) - .st({ - cursor: 'pointer' - }) - .append('g').translate(height/2, 1) - svgSel.append('rect').at({width, height, y: -height/2, fill: '#fff'}) - - svgSel.append('path').at({ - d: `M 0 0 H ${width}`, - stroke: '#000', - strokeWidth: 2 - }) - - var drag = d3.drag() - .on('drag', function(d){ - var x = d3.mouse(this)[0] - d.v = d3.clamp(d3.min(d.r), d.s.invert(x), d3.max(d.r)) - - updatePos() - }) - - var circleSel = svgSel.append('circle') - .at({ - r: height/2, - stroke: '#000', - strokeWidth: 2, - fill: '#fff', - }) - .call(drag) - - - function updatePos(){ - circleSel.at({cx: d => d.s(d.v)}) - if (sliderVals.onUpdate) sliderVals.onUpdate() - } - - updatePos() - sliderVals.updatePos = updatePos -})() diff --git a/spaces/merve/measuring-fairness/source/measuring-fairness/slider.js b/spaces/merve/measuring-fairness/source/measuring-fairness/slider.js deleted file mode 100644 index efcbc18387d0d0cb957e34f75bb20a83131dda8e..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/source/measuring-fairness/slider.js +++ /dev/null @@ -1,139 +0,0 @@ -/* Copyright 2020 Google 
LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - - - - - - -window.makeSlider = function(){ - - var width = 300 - var height = 30 - - var x = d3.scaleLinear() - .domain([.99, .6]) - .range([0, width]) - .clamp(true) - - var rv = {} - rv.threshold = .5 - rv.setSlider = makeSetSlider(students, 'threshold') - rv.setSliderF = makeSetSlider(students.filter(d => !d.isMale), 'threshold_f') - rv.setSliderM = makeSetSlider(students.filter(d => d.isMale), 'threshold_m') - - var allActiveSel = d3.selectAll('.threshold-rect') - var allHandleSel = d3.selectAll('.threshold-handle') - - var gatedSel = d3.select('.gated') - - function makeSetSlider(data, key){ - var text = key.split('_')[1] - - - var drag = d3.drag() - .on('drag', function(d){ - updateThreshold(x.invert(d3.mouse(this)[0])) - // console.log(d3.event.x) - - if (text && slider.threshold_f && (slider.threshold_f > 0.9042 || slider.threshold_f - slider.threshold_m > .05)){ - gatedSel.classed('opened', 1) - svg.classed('no-blink', 1) - } - - if (key == 'threshold') svg.classed('no-blink', 1) - }) - - var svg = d3.select('.slider.' 
+ key).html('') - .append('svg').at({width, height}) - .call(drag) - .st({cursor: 'pointer'}) - - if (key == 'threshold_m') svg.classed('no-blink', 1) - - - - svg.append('rect').at({width, height, fill: lcolors.well}) - - var rectSel = svg.append('rect.threshold-rect') - .at({width, height, fill: lcolors.sick}) - - var handleSel = svg.append('g.threshold-handle') - handleSel.append('text.cursor') - .text('▲') - .at({textAnchor: 'middle', fontSize: 10, y: height, dy: '.8em'}) - handleSel.append('circle') - .at({cy: height, r: 30, fill: 'rgba(0,0,0,0)'}) - - var labelText = 'Model Aggressiveness _→' - var _replacement = !text ? '' : 'On ' + (text == 'f' ? 'Women ' : 'Men ') - - var labelText = '_Model Aggressiveness →' - var _replacement = !text ? '' : (text == 'f' ? 'Adult ' : 'Adult ') - - var labelText = '_Model Decision Point' - var _replacement = !text ? '' : (text == 'f' ? 'Adult ' : 'Adult ') - - var labelText = 'Model Decision Point_' - var _replacement = !text ? '' : (text == 'f' ? ' for Adults ' : ' for Children ') - - var labelText = '_ Model Aggressiveness →' - var _replacement = !text ? '' : (text == 'f' ? ' Adult ' : 'Child ') - - - svg.append('text.axis').text(labelText.replace('_', _replacement)) - .at({y: height/2, dy: '.33em', dx: 10}) - .st({pointerEvents: 'none'}) - - - - function updateThreshold(threshold, skipDom){ - rv[key] = threshold - data.forEach(d => d.threshold = threshold) - - mini.updateAll() - - rectSel.at({width: x(threshold)}) - handleSel.translate(x(threshold), 0) - - if (skipDom) return - - if (key == 'threshold'){ - allActiveSel.at({width: x(threshold)}) - allHandleSel.translate(x(threshold), 0) - } - - sel.rectSel.at({fill: d => d.grade > d.threshold ? lcolors.sick : lcolors.well}) - sel.textSel - .st({ - strokeWidth: d => d.grade > d.threshold == d.isSick ? 
0 : .6, - }) - - } - - return updateThreshold - } - - return rv -} - - - - - - -if (window.init) window.init() diff --git a/spaces/merve/uncertainty-calibration/source/dataset-worldviews/shape-explainer.js b/spaces/merve/uncertainty-calibration/source/dataset-worldviews/shape-explainer.js deleted file mode 100644 index ce184ec2d52346fe3dd5deca774e9f36551ed977..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/source/dataset-worldviews/shape-explainer.js +++ /dev/null @@ -1,500 +0,0 @@ -console.clear(); - -var shapeScale = 0.6; - -var keyedData = { - pointiness_true: { - name: "pointiness_true", - isRounding: true, - categoryName: "pointiness", - categories: ["pointy", "round"], - textPlacements: {}, - }, - pointiness_false: { - name: "pointiness_false", - isRounding: false, - categoryName: "pointiness", - categories: ["pointy", "round", "other"], - textPlacements: {}, - }, - shape_name_true: { - name: "shape_name_true", - isRounding: true, - categoryName: "shape_name", - categories: ["circle", "triangle", "rect"], - textPlacements: {}, - }, - shape_name_false: { - name: "shape_name_false", - isRounding: false, - categoryName: "shape_name", - categories: ["circle", "triangle", "rect", "other"], - textPlacements: {}, - }, - size_true: { - name: "size_true", - isRounding: true, - categoryName: "size", - categories: ["small", "large"], - textPlacements: {}, - }, - size_false: { - name: "size_false", - isRounding: false, - categoryName: "size", - categories: ["small", "large", "other"], - textPlacements: {}, - }, -}; - -var data = []; -for (var key in keyedData) { - data.push(keyedData[key]); -} - -var state = { - selected: data[0], - selectedTopIndex: 0, - selectedBottomIndex: 0, -}; - -function updateState( - category, - rounding, - topIndex = undefined, - bottomIndex = undefined -) { - var key = category + "_" + rounding; - state.selected = keyedData[key]; - state.selectedTopIndex = topIndex; - state.selectedBottomIndex = 
bottomIndex; -} - -// Placements for the center labels -var textPlacements = {}; - -var divHeight = 720; -var divWidth = 850; - -var c = d3.conventions({ - sel: d3.select(".shape-explainer").html(""), - width: divWidth, - height: divHeight, - layers: "ds", -}); - -var buttonHeight = 35; -var buttonWidth = 200; -var buttonBuffer = 15; -var topRightShift = 200; -var bottomRightShift = 270; - -function setActiveButton() { - topExplainerButtonSel.classed( - "explainer-active-button", - (d, i) => i == state.selectedTopIndex - ); - bottomExplainerButtonSel.classed( - "explainer-active-button", - (d, i) => i == state.selectedBottomIndex - ); -} - -// Preamble text -c.svg - .append("text.top-explainer-text") - .at({ - textAnchor: "left", - dominantBaseline: "top", - dy: ".33em", - }) - .translate([0, buttonHeight / 2]) - .text("All shapes are basically..."); - -c.svg - .append("text.bottom-explainer-text") - .at({ - textAnchor: "left", - dominantBaseline: "top", - dy: ".33em", - }) - .translate([0, buttonHeight * 1.5 + buttonBuffer]) - .text("Everything else should be labeled..."); - -// Buttons -var topExplainerButtonSel = c.svg - .appendMany("g.explainer-button", ["pointiness", "shape_name", "size"]) - .at({}) - .translate((d, i) => [topRightShift + i * (buttonWidth + buttonBuffer), 0]) - .on("click", function (d, i) { - updateState( - d, - state.selected.isRounding, - (topIndex = i), - (bottomIndex = state.selectedBottomIndex) - ); - setActiveButton(); - moveShapes(); - }); - -topExplainerButtonSel.append("rect").at({ - height: buttonHeight, - width: buttonWidth, - class: "explainer-rect", -}); - -topExplainerButtonSel - .append("text") - .at({ - textAnchor: "middle", - dy: ".33em", - x: buttonWidth / 2, - y: buttonHeight / 2, - class: "dropdown", - }) - .text((d, i) => toShortValueStringDict[d]); - -var bottomExplainerButtonSel = c.svg - .appendMany("g.explainer-button", ["true", "false"]) - .at({}) - .translate((d, i) => [ - bottomRightShift + i * (buttonWidth + 
buttonBuffer), - buttonHeight + buttonBuffer, - ]) - .on("click", function (d, i) { - updateState( - state.selected.categoryName, - d, - (topIndex = state.selectedTopIndex), - (bottomIndex = i) - ); - setActiveButton(); - moveShapes(); - }); - -bottomExplainerButtonSel.append("rect").at({ - height: buttonHeight, - width: buttonWidth, - class: "explainer-rect", -}); - -bottomExplainerButtonSel - .append("text") - .at({ - textAnchor: "middle", - dy: ".33em", - x: buttonWidth / 2, - y: buttonHeight / 2, - class: "dropdown", - }) - .text((d, i) => toDropdownValueRoundingStringDict[d]); - -var horizontalHeight = divHeight * (5 / 8); -var horizontalBuffer = 50; - -p = d3.line()([ - [horizontalBuffer, horizontalHeight], - [divWidth - horizontalBuffer, horizontalHeight], -]); - -var horizontal = c.svg - .append("path") - .at({ - d: p, - stroke: "black", - strokeWidth: 1, - }) - .translate([0, 0]) - .style("stroke-dasharray", "5, 5"); - - -c.svg - .append("text.label-correct") - .at({ - x: -400, - y: 90, - }) - .text("correctly classified") - .attr("transform", "rotate(-90)"); - -c.svg - .append("text.label-correct") - .at({ - x: -630, - y: 90, - }) - .text("incorrectly classified") - .attr("transform", "rotate(-90)"); - - -// Manually make some small adjustments to where particular shapes are placed -function getFineAdjustment(shape) { - if ( - shape.shape_name == "rt_rect" && - shape.correctness == "incorrect" && - shape.gt == "shaded" - ) { - return 4; - } - if ( - shape.shape_name == "rect" && - shape.correctness == "incorrect" && - shape.gt == "unshaded" - ) { - return -10; - } - if ( - shape.shape_name == "triangle" && - shape.correctness == "incorrect" && - shape.gt == "unshaded" - ) { - return 0; - } - if ( - shape.shape_name == "rt_circle" && - shape.correctness == "incorrect" && - shape.size == "small" - ) { - return -20; - } - if ( - shape.shape_name == "rt_triangle" && - shape.correctness == "incorrect" && - shape.size == "small" - ) { - return -20; - } - return 
0; -} - -function getFinalCategory(labelName, isRounding) { - if (isRounding == true) { - return labelName.replace("rt_", ""); - } else { - if (labelName.includes("rt_")) { - return "other"; - } else { - return labelName; - } - } -} - -var startingCorrectHeight = horizontalHeight - 50; -var startingIncorrectHeight = horizontalHeight + 50; -var maxHeight = 180; -var xRowAdjustment = 50; -var heightBuffer = 10; - -function getPathHeight(inputPath) { - var placeholder = c.svg.append("path").at({ - d: scaleShapePath(inputPath, shapeScale), - }); - var height = placeholder.node().getBBox().height; - placeholder.remove(); - return height + heightBuffer; -} - -// Figure out where to put the shapes for all possible placements -function generatePlacements() { - for (selectionCriteria of data) { - // starting X positions - var nCategories = selectionCriteria.categories.length; - var centerX = []; - for (var i = 0; i < nCategories; i++) { - var startingX = divWidth * ((i + 1) / (nCategories + 1)); - centerX.push(startingX); - // Track where each label should be placed using a dictionary in the data - selectionCriteria["textPlacements"][ - selectionCriteria.categories[i] - ] = startingX; - } - - // For keeping of track of how we place items as we go - var locationParams = {}; - for (categoryIdx in selectionCriteria.categories) { - var categoryName = selectionCriteria.categories[categoryIdx]; - locationParams[categoryName] = { - correctX: centerX[categoryIdx], - incorrectX: centerX[categoryIdx], - lastCorrectY: startingCorrectHeight, - lastIncorrectY: startingIncorrectHeight, - }; - } - - for (shape of shapeParams) { - shapeCategory = getFinalCategory( - shape[selectionCriteria.categoryName], - selectionCriteria.isRounding - ); - var shapeHeight = getPathHeight(shape.path); - var shapeX, - shapeY = 0; - if (shape.correctness == "correct") { - shapeY = locationParams[shapeCategory]["lastCorrectY"]; - shapeX = locationParams[shapeCategory]["correctX"]; - // Check if we've reached 
the maximum height - if ( - startingCorrectHeight - - locationParams[shapeCategory]["lastCorrectY"] >= - maxHeight - ) { - // Reset height to baseline - locationParams[shapeCategory]["lastCorrectY"] = - startingCorrectHeight; - // Move next row over - locationParams[shapeCategory]["correctX"] = - locationParams[shapeCategory]["correctX"] + - xRowAdjustment; - } else { - locationParams[shapeCategory]["lastCorrectY"] += - -1 * shapeHeight; - } - } else { - shapeY = locationParams[shapeCategory]["lastIncorrectY"]; - shapeX = locationParams[shapeCategory]["incorrectX"]; - - if ( - locationParams[shapeCategory]["lastIncorrectY"] - - startingIncorrectHeight >= - maxHeight - ) { - // Reset height to baseline - locationParams[shapeCategory]["lastIncorrectY"] = - startingIncorrectHeight; - // Move next row over - locationParams[shapeCategory]["incorrectX"] = - locationParams[shapeCategory]["incorrectX"] + - xRowAdjustment; - } else { - locationParams[shapeCategory]["lastIncorrectY"] += - shapeHeight; - } - } - shapeY = shapeY + getFineAdjustment(shape); - shape[selectionCriteria.name + "_X"] = shapeX; - shape[selectionCriteria.name + "_Y"] = shapeY; - } - } -} - -generatePlacements(); - -function getLocation(shape) { - return [ - shape[state.selected.name + "_X"], - shape[state.selected.name + "_Y"], - ]; -} - -function scaleShapePath(shapePath, factor = 0.5) { - var newShapePath = ""; - for (var token of shapePath.split(" ")) { - if (parseInt(token)) { - newShapePath = newShapePath + parseInt(token) * factor; - } else { - newShapePath = newShapePath + token; - } - newShapePath = newShapePath + " "; - } - return newShapePath; -} - -// Add the shapes -var explainerShapeSel = c.svg - .appendMany("path.shape", shapeParams) - .at({ - d: (d) => scaleShapePath(d.path, shapeScale), - class: (d) => "gt-" + d.gt + " " + d.correctness, - }) - .translate(function (d) { - return getLocation(d); - }); - -explainerShapeSel.classed("is-classified", true); - -function getColor(d) { - var 
scaleRowValue = d3.scaleLinear().domain([0.3, 1.0]).range([0, 1]); - return d3.interpolateRdYlGn(scaleRowValue(d)); -} - -// Retrieve the results, for coloring the label boxes -function getResults() { - return calculateResults( - (property = state.selected.categoryName), - (useGuess = state.selected.isRounding) - ); -} - -function getCategoryAccuracy(results, category) { - for (var key of results) { - if (key.rawCategoryName == category) { - return key.accuracy; - } - } -} - -// Rename "large" and "rect" -function toExplainerDisplayString(categoryName) { - if (categoryName == "large") { - return "big"; - } - if (categoryName == "rect") { - return "rectangle"; - } - return categoryName; -} - -function getExplainerTextColor(d, i) { - console.log(d == "large"); - if (d == "large" && state.selected.isRounding == false) { - return "#ffccd8"; - } else { - return "#000000"; - } -} - -function updateText() { - var explainerResults = getResults(); - - d3.selectAll(".explainer-label-text").html(""); - d3.selectAll(".explainer-label-rect").remove(); - - var rectHeight = 30; - var rectWidth = 80; - var textRect = c.svg - .appendMany("rect.column-text-rect", state.selected.categories) - .at({ - fill: (d) => getColor(getCategoryAccuracy(explainerResults, d)), - height: rectHeight, - width: rectWidth, - class: "explainer-label-rect", - }) - .translate((d) => [ - state.selected.textPlacements[d] - rectWidth / 2, - horizontalHeight - rectHeight / 2, - ]); - - var text = c.svg - .appendMany("text.column-text", state.selected.categories) - .at({ - textAnchor: "middle", - dominantBaseline: "central", - class: "explainer-label-text", - }) - .st({ - fill: getExplainerTextColor, - }) - .text((d) => toExplainerDisplayString(d)) - .translate((d) => [state.selected.textPlacements[d], horizontalHeight]); -} - -function moveShapes() { - explainerShapeSel - .transition() - .duration(500) - .translate((d) => getLocation(d)); - updateText(); -} - -setActiveButton(); -updateText(); \ No newline 
at end of file diff --git a/spaces/mikeee/convbot/app.py b/spaces/mikeee/convbot/app.py deleted file mode 100644 index 5bfb0054567e734f025fe806a9c7e04db730f3f3..0000000000000000000000000000000000000000 --- a/spaces/mikeee/convbot/app.py +++ /dev/null @@ -1,26 +0,0 @@ -"""Run.""" -# pylint: disable=invalid-name -from random import choice -import gradio as gr - -from convbot import convbot - -lost_msg = [ - "I don't follow.", - "Say it again?", - "Come again?", - "I am afraid I don't understand.", - "I am lost.", -] - - -def bot(message: str) -> str: - try: - res = convbot(message) - except Exception as exc: - res = f"{choice(lost_msg)} (reason: {exc})" - return res - - -iface = gr.Interface(fn=bot, inputs="text", outputs="text") -iface.launch() diff --git a/spaces/mikeee/multilingual-dokugpt/epub_loader.py b/spaces/mikeee/multilingual-dokugpt/epub_loader.py deleted file mode 100644 index c4619582863b8a20573efa721191358462941d78..0000000000000000000000000000000000000000 --- a/spaces/mikeee/multilingual-dokugpt/epub_loader.py +++ /dev/null @@ -1,38 +0,0 @@ -"""Load an epub file into a list of documents.""" -from dataclasses import dataclass -from pathlib import Path -from typing import List, Union - -from epub2txt import epub2txt -from langchain.docstore.document import Document -from langchain.document_loaders.base import BaseLoader -from loguru import logger - - -@dataclass -class EpubLoader(BaseLoader): - """Load an epub file into a list of documents. 
- - Args: - file_path: file path or url to epub - Returns: - self.load() -> list of Documents - """ - file_path: Union[str, Path] - - def load(self) -> List[Document]: - """Load data into document objects.""" - try: - texts = epub2txt(self.file_path, outputlist=True) - ch_titles = epub2txt.content_titles - - except Exception as exc: - logger.error(exc) - raise - - docs = [] - for title, text in zip(ch_titles, texts): - metadata = {"source": self.file_path, "ch.": title} - docs.append(Document(page_content=text, metadata=metadata)) - - return docs diff --git a/spaces/mindspore-ai/Wuhan-LuoJiaNET/header.html b/spaces/mindspore-ai/Wuhan-LuoJiaNET/header.html deleted file mode 100644 index 52cf60dc2ceabe06a64bc23772255d71ec6dea4c..0000000000000000000000000000000000000000 --- a/spaces/mindspore-ai/Wuhan-LuoJiaNET/header.html +++ /dev/null @@ -1,27 +0,0 @@ -
    -
    -
    -
    -

    -

    -
    - - -
    \ No newline at end of file diff --git a/spaces/mishtert/tracer/README.md b/spaces/mishtert/tracer/README.md deleted file mode 100644 index c509fa39aea1fcc4a114a31db7be27342b40343b..0000000000000000000000000000000000000000 --- a/spaces/mishtert/tracer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Tracer -emoji: 🐢 -colorFrom: blue -colorTo: blue -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mrm8488/whisper-large-v3/README.md b/spaces/mrm8488/whisper-large-v3/README.md deleted file mode 100644 index 59c55d6ab57fa8197d3f8eaf8dd07fa22683a37f..0000000000000000000000000000000000000000 --- a/spaces/mrm8488/whisper-large-v3/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Whisper V3 Large Demo -emoji: 🎙️ -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: true -tags: -- whisper-event ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/run_scripts/snli_ve/eval/eval_snli_ve_base_best.sh b/spaces/mshukor/UnIVAL/run_scripts/snli_ve/eval/eval_snli_ve_base_best.sh deleted file mode 100644 index c622ca3c6d921d06409984f2430f7687409c2dd5..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/run_scripts/snli_ve/eval/eval_snli_ve_base_best.sh +++ /dev/null @@ -1,158 +0,0 @@ -#!/usr/bin/env bash - -# The port for communication. Note that if you want to run multiple tasks on the same machine, -# you need to specify different port numbers. -# The port for communication. Note that if you want to run multiple tasks on the same machine, -# you need to specify different port numbers. 
-# Number of GPUs per GPU worker -export GPUS_PER_NODE=8 -# Number of GPU workers, for single-worker training, please set to 1 -export NUM_NODES=$SLURM_NNODES -# The ip address of the rank-0 worker, for single-worker training, please set to localhost -master_addr=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1) -export MASTER_ADDR=$master_addr - -# The port for communication -export MASTER_PORT=12350 -# The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0 -export RANK=$SLURM_NODEID - -echo "MASTER_ADDR: $MASTER_ADDR" -echo "RANK :$RANK" -echo "NUM_NODES :$NUM_NODES" -echo "GPUS_PER_NODE :$GPUS_PER_NODE" - -export MIOPEN_USER_DB_PATH=/lus/home/NAT/gda2204/mshukor/.config/miopen_${MASTER_ADDR}_${SLURM_PROCID}/ - -echo "MIOPEN_USER_DB_PATH :$MIOPEN_USER_DB_PATH" - -num_workers=0 - - -exp_name=eval_snli_ve_base_best - - - -ofa_dir=/lus/home/NAT/gda2204/mshukor/code/unival -base_data_dir=/lus/scratch/NAT/gda2204/SHARED/data -base_log_dir=/work/NAT/gda2204/mshukor/logs - - - - -bpe_dir=${ofa_dir}/utils/BPE -user_dir=${ofa_dir}/ofa_module - - -data_dir=${base_data_dir}/ofa/snli_ve_data - -# test or dev -split=dev -read_from_img_path='' #'--read-from-img-path' # '' - -data=${data_dir}/snli_ve_${split}.tsv - -zero_shot='' - - -new_base_log_dir=/lus/scratch/NAT/gda2204/SHARED/logs -# model_name=avg_rata_l0_7snlirefcapvqa -# path=/lus/scratch/NAT/gda2204/SHARED/logs/ofa/pretrained_models/average_models/avg_rata_l0_7snlirefcapvqa.pt - - - -model_name=avg_postratafuse -path=/lus/scratch/NAT/gda2204/SHARED/logs/ofa/pretrained_models/average_models/avg_postratafuse.pt -zero_shot='--zero-shot' - - - -result_path=${new_base_log_dir}/ofa/results/snli_ve/snli_ve_${split}_${model_name} - -mkdir ${result_path} - - -selected_cols=0,2,3,4,5 -valid_batch_size=20 - - -image_dir=${base_data_dir} - - -python3 -m torch.distributed.launch \ - --nnodes=${NUM_NODES} \ - --nproc_per_node=${GPUS_PER_NODE} \ - 
--master_port=${MASTER_PORT} \ - --node_rank=${RANK} \ - --master_addr=${MASTER_ADDR} \ - --use_env ${ofa_dir}/evaluate.py \ - ${data} \ - --path=${path} \ - --user-dir=${user_dir} \ - --task=snli_ve \ - --batch-size=8 \ - --log-format=simple --log-interval=10 \ - --seed=7 \ - --gen-subset=${split} \ - --results-path=${result_path} \ - --fp16 \ - --num-workers=0 \ - --model-overrides="{\"data\":\"${data}\",\"bpe_dir\":\"${bpe_dir}\",\"selected_cols\":\"${selected_cols}\"}" --image-dir=${image_dir} \ - ${read_from_img_path} \ - ${zero_shot} \ - --prompt-type='prev_output' \ - --strict \ - --noconstraints \ - --patch-image-size=480 \ - -# --ema-eval \ - - - - - -# test or dev -split=test -read_from_img_path='' #'--read-from-img-path' # '' - -data=${data_dir}/snli_ve_${split}.tsv - - - -result_path=${base_log_dir}/ofa/results/snli_ve/snli_ve_${split}_${model_name} -mkdir ${result_path} - - -selected_cols=0,2,3,4,5 -valid_batch_size=20 - - -image_dir=${base_data_dir} - - -python3 -m torch.distributed.launch \ - --nnodes=${NUM_NODES} \ - --nproc_per_node=${GPUS_PER_NODE} \ - --master_port=${MASTER_PORT} \ - --node_rank=${RANK} \ - --master_addr=${MASTER_ADDR} \ - --use_env ${ofa_dir}/evaluate.py \ - ${data} \ - --path=${path} \ - --user-dir=${user_dir} \ - --task=snli_ve \ - --batch-size=8 \ - --log-format=simple --log-interval=10 \ - --seed=7 \ - --gen-subset=${split} \ - --results-path=${result_path} \ - --fp16 \ - --num-workers=0 \ - --model-overrides="{\"data\":\"${data}\",\"bpe_dir\":\"${bpe_dir}\",\"selected_cols\":\"${selected_cols}\"}" --image-dir=${image_dir} \ - ${read_from_img_path} \ - ${zero_shot} \ - --prompt-type='prev_output' \ - --strict \ - --noconstraints - -# --ema-eval \ \ No newline at end of file diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/docker/build-cuda111.sh b/spaces/myrad01/Inpaint-Anything/third_party/lama/docker/build-cuda111.sh deleted file mode 100644 index 
b0824f5d536f548fde0b1c8e07cc95217d91310d..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/docker/build-cuda111.sh +++ /dev/null @@ -1,5 +0,0 @@ -#!/bin/bash - -BASEDIR="$(dirname $0)" - -docker build -t windj007/lama:cuda111 -f "$BASEDIR/Dockerfile-cuda111" "$BASEDIR" diff --git a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/glcontext.py b/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/glcontext.py deleted file mode 100644 index 881df0feca38678d6c075ef85ae65c12875b6b48..0000000000000000000000000000000000000000 --- a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/glcontext.py +++ /dev/null @@ -1,142 +0,0 @@ -"""Headless GPU-accelerated OpenGL context creation on Google Colaboratory. - -Typical usage: - - # Optional PyOpenGL configuration can be done here. - # import OpenGL - # OpenGL.ERROR_CHECKING = True - - # 'glcontext' must be imported before any OpenGL.* API. - from lucid.misc.gl.glcontext import create_opengl_context - - # Now it's safe to import OpenGL and EGL functions - import OpenGL.GL as gl - - # create_opengl_context() creates a GL context that is attached to an - # offscreen surface of the specified size. Note that rendering to buffers - # of other sizes and formats is still possible with OpenGL Framebuffers. - # - # Users are expected to directly use the EGL API in case more advanced - # context management is required. - width, height = 640, 480 - create_opengl_context((width, height)) - - # OpenGL context is available here. 
- -""" - -from __future__ import print_function - -# pylint: disable=unused-import,g-import-not-at-top,g-statement-before-imports - -try: - import OpenGL -except: - print('This module depends on PyOpenGL.') - print('Please run "\033[1m!pip install -q pyopengl\033[0m" ' - 'prior importing this module.') - raise - -import ctypes -from ctypes import pointer, util -import os - -os.environ['PYOPENGL_PLATFORM'] = 'egl' - -# OpenGL loading workaround. -# -# * PyOpenGL tries to load libGL, but we need libOpenGL, see [1,2]. -# This could have been solved by a symlink libGL->libOpenGL, but: -# -# * Python 2.7 can't find libGL and linEGL due to a bug (see [3]) -# in ctypes.util, that was only wixed in Python 3.6. -# -# So, the only solution I've found is to monkeypatch ctypes.util -# [1] https://devblogs.nvidia.com/egl-eye-opengl-visualization-without-x-server/ -# [2] https://devblogs.nvidia.com/linking-opengl-server-side-rendering/ -# [3] https://bugs.python.org/issue9998 -_find_library_old = ctypes.util.find_library -try: - - def _find_library_new(name): - return { - 'GL': 'libOpenGL.so', - 'EGL': 'libEGL.so', - }.get(name, _find_library_old(name)) - util.find_library = _find_library_new - import OpenGL.GL as gl - import OpenGL.EGL as egl - from OpenGL import error - from OpenGL.EGL.EXT.device_base import egl_get_devices - from OpenGL.raw.EGL.EXT.platform_device import EGL_PLATFORM_DEVICE_EXT -except: - print('Unable to load OpenGL libraries. 
' - 'Make sure you use GPU-enabled backend.') - print('Press "Runtime->Change runtime type" and set ' - '"Hardware accelerator" to GPU.') - raise -finally: - util.find_library = _find_library_old - -def create_initialized_headless_egl_display(): - """Creates an initialized EGL display directly on a device.""" - for device in egl_get_devices(): - display = egl.eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, device, None) - - if display != egl.EGL_NO_DISPLAY and egl.eglGetError() == egl.EGL_SUCCESS: - # `eglInitialize` may or may not raise an exception on failure depending - # on how PyOpenGL is configured. We therefore catch a `GLError` and also - # manually check the output of `eglGetError()` here. - try: - initialized = egl.eglInitialize(display, None, None) - except error.GLError: - pass - else: - if initialized == egl.EGL_TRUE and egl.eglGetError() == egl.EGL_SUCCESS: - return display - return egl.EGL_NO_DISPLAY - -def create_opengl_context(surface_size=(640, 480)): - """Create offscreen OpenGL context and make it current. - - Users are expected to directly use EGL API in case more advanced - context management is required. - - Args: - surface_size: (width, height), size of the offscreen rendering surface. 
- """ - egl_display = create_initialized_headless_egl_display() - if egl_display == egl.EGL_NO_DISPLAY: - raise ImportError('Cannot initialize a headless EGL display.') - - major, minor = egl.EGLint(), egl.EGLint() - egl.eglInitialize(egl_display, pointer(major), pointer(minor)) - - config_attribs = [ - egl.EGL_SURFACE_TYPE, egl.EGL_PBUFFER_BIT, egl.EGL_BLUE_SIZE, 8, - egl.EGL_GREEN_SIZE, 8, egl.EGL_RED_SIZE, 8, egl.EGL_DEPTH_SIZE, 24, - egl.EGL_RENDERABLE_TYPE, egl.EGL_OPENGL_BIT, egl.EGL_NONE - ] - config_attribs = (egl.EGLint * len(config_attribs))(*config_attribs) - - num_configs = egl.EGLint() - egl_cfg = egl.EGLConfig() - egl.eglChooseConfig(egl_display, config_attribs, pointer(egl_cfg), 1, - pointer(num_configs)) - - width, height = surface_size - pbuffer_attribs = [ - egl.EGL_WIDTH, - width, - egl.EGL_HEIGHT, - height, - egl.EGL_NONE, - ] - pbuffer_attribs = (egl.EGLint * len(pbuffer_attribs))(*pbuffer_attribs) - egl_surf = egl.eglCreatePbufferSurface(egl_display, egl_cfg, pbuffer_attribs) - - egl.eglBindAPI(egl.EGL_OPENGL_API) - - egl_context = egl.eglCreateContext(egl_display, egl_cfg, egl.EGL_NO_CONTEXT, - None) - egl.eglMakeCurrent(egl_display, egl_surf, egl_surf, egl_context) diff --git a/spaces/nielsr/swin2sr-image-super-resolution/app.py b/spaces/nielsr/swin2sr-image-super-resolution/app.py deleted file mode 100644 index cec7c26c1dafbb9b1174c18137787f7f7344d508..0000000000000000000000000000000000000000 --- a/spaces/nielsr/swin2sr-image-super-resolution/app.py +++ /dev/null @@ -1,55 +0,0 @@ -import gradio as gr -import requests -from PIL import Image -import os -import torch -import numpy as np -from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution - -torch.hub.download_url_to_file('https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/00003.jpg', '00003.jpg') -torch.hub.download_url_to_file('https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/0855.jpg', '0855.jpg') 
-torch.hub.download_url_to_file('https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/ali_eye.jpg', 'ali_eye.jpg') -torch.hub.download_url_to_file('https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/butterfly.jpg', 'butterfly.jpg') -torch.hub.download_url_to_file('https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/chain-eye.jpg', 'chain-eye.jpg') -torch.hub.download_url_to_file('https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/gojou-eyes.jpg', 'gojou-eyes.jpg') -torch.hub.download_url_to_file('https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/shanghai.jpg', 'shanghai.jpg') -torch.hub.download_url_to_file('https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/vagabond.jpg', 'vagabond.jpg') - -processor = AutoImageProcessor.from_pretrained("caidas/swin2SR-classical-sr-x2-64") -model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-classical-sr-x2-64") - -def enhance(image): - # prepare image for the model - inputs = processor(image, return_tensors="pt") - - # forward pass - with torch.no_grad(): - outputs = model(**inputs) - - # postprocess - output = outputs.reconstruction.data.squeeze().float().cpu().clamp_(0, 1).numpy() - output = np.moveaxis(output, source=0, destination=-1) - output = (output * 255.0).round().astype(np.uint8) # float32 to uint8 - - return Image.fromarray(output) - -title = "Demo: Swin2SR for Image Super-Resolution 🚀🚀🔥" -description = ''' - -**This demo expects low-quality and low-resolution JPEG compressed images.** - -**Demo notebook can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Swin2SR/Perform_image_super_resolution_with_Swin2SR.ipynb). -''' -article = "

    Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration | HuggingFace docs

    " - -examples = [['00003.jpg'], ['0855.jpg'], ['ali_eye.jpg'], ['butterfly.jpg'], ['chain-eye.jpg'], ['gojou-eyes.jpg'], ['shanghai.jpg'], ['vagabond.jpg']] - -gr.Interface( - enhance, - gr.inputs.Image(type="pil", label="Input").style(height=260), - gr.inputs.Image(type="pil", label="Output").style(height=240), - title=title, - description=description, - article=article, - examples=examples, - ).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/niizam/sovits-models/utils.py b/spaces/niizam/sovits-models/utils.py deleted file mode 100644 index e19cac39c57f213bbf6f1435ab48fe7948a1b17b..0000000000000000000000000000000000000000 --- a/spaces/niizam/sovits-models/utils.py +++ /dev/null @@ -1,501 +0,0 @@ -import os -import glob -import re -import sys -import argparse -import logging -import json -import subprocess -import random - -import librosa -import numpy as np -from scipy.io.wavfile import read -import torch -from torch.nn import functional as F -from modules.commons import sequence_mask -from hubert import hubert_model -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - -# def normalize_f0(f0, random_scale=True): -# f0_norm = f0.clone() # create a copy of the input Tensor -# batch_size, _, frame_length = f0_norm.shape -# for i in range(batch_size): -# means = torch.mean(f0_norm[i, 0, :]) -# if random_scale: -# factor = random.uniform(0.8, 1.2) -# else: -# factor = 1 -# f0_norm[i, 0, :] = (f0_norm[i, 0, :] - means) * factor -# return f0_norm -# def normalize_f0(f0, random_scale=True): -# means = torch.mean(f0[:, 0, :], dim=1, keepdim=True) -# if random_scale: -# factor = torch.Tensor(f0.shape[0],1).uniform_(0.8, 1.2).to(f0.device) -# else: -# factor = torch.ones(f0.shape[0], 1, 1).to(f0.device) -# f0_norm = (f0 - means.unsqueeze(-1)) * 
factor.unsqueeze(-1) -# return f0_norm -def normalize_f0(f0, x_mask, uv, random_scale=True): - # calculate means based on x_mask - uv_sum = torch.sum(uv, dim=1, keepdim=True) - uv_sum[uv_sum == 0] = 9999 - means = torch.sum(f0[:, 0, :] * uv, dim=1, keepdim=True) / uv_sum - - if random_scale: - factor = torch.Tensor(f0.shape[0], 1).uniform_(0.8, 1.2).to(f0.device) - else: - factor = torch.ones(f0.shape[0], 1).to(f0.device) - # normalize f0 based on means and factor - f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) - if torch.isnan(f0_norm).any(): - exit(0) - return f0_norm * x_mask - - -def plot_data_to_numpy(x, y): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - plt.plot(x) - plt.plot(y) - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - - -def interpolate_f0(f0): - ''' - 对F0进行插值处理 - ''' - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] - last_value = data[i] - - return ip_data[:,0], vuv_vector[:,0] - - -def 
compute_f0_parselmouth(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import parselmouth - x = wav_numpy - if p_len is None: - p_len = x.shape[0]//hop_length - else: - assert abs(p_len-x.shape[0]//hop_length) < 4, "pad length error" - time_step = hop_length / sampling_rate * 1000 - f0_min = 50 - f0_max = 1100 - f0 = parselmouth.Sound(x, sampling_rate).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - return f0 - -def resize_f0(x, target_len): - source = np.array(x) - source[source<0.001] = np.nan - target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source) - res = np.nan_to_num(target) - return res - -def compute_f0_dio(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import pyworld - if p_len is None: - p_len = wav_numpy.shape[0]//hop_length - f0, t = pyworld.dio( - wav_numpy.astype(np.double), - fs=sampling_rate, - f0_ceil=800, - frame_period=1000 * hop_length / sampling_rate, - ) - f0 = pyworld.stonemask(wav_numpy.astype(np.double), f0, t, sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return resize_f0(f0, p_len) - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def get_hubert_model(): - vec_path = 
"hubert/checkpoint_best_legacy_500.pt" - print("load model(s) from {}".format(vec_path)) - from fairseq import checkpoint_utils - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [vec_path], - suffix="", - ) - model = models[0] - model.eval() - return model - -def get_hubert_content(hmodel, wav_16k_tensor): - feats = wav_16k_tensor - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.to(wav_16k_tensor.device), - "padding_mask": padding_mask.to(wav_16k_tensor.device), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = hmodel.extract_features(**inputs) - feats = hmodel.final_proj(logits[0]) - return feats.transpose(1, 2) - - -def get_content(cmodel, y): - with torch.no_grad(): - c = cmodel.extract_features(y.squeeze(1))[0] - c = c.transpose(1, 2) - return c - - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - # assert "dec" in k or "disc" in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): 
- model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. 
Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel 
= 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = 
os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -def repeat_expand_2d(content, target_len): - # content : [h, t] - - src_len = content.shape[-1] - target = torch.zeros([content.shape[0], target_len], dtype=torch.float).to(content.device) - temp = torch.arange(src_len+1) * target_len / src_len - current_pos = 0 - for i in range(target_len): - if i < temp[current_pos+1]: - target[:, i] = content[:, current_pos] - else: - current_pos += 1 - target[:, i] = content[:, current_pos] - - return target - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def 
__contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py deleted file mode 100644 index b3b4d1c5663fb49b2fc40752d6b7a42eddd58e75..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py +++ /dev/null @@ -1,102 +0,0 @@ -#!/usr/bin/env python3 -# Copyright 2004-present Facebook. All Rights Reserved. - -import numpy as np -from typing import List - -from detectron2.config import CfgNode as CfgNode_ -from detectron2.config import configurable - -from .base_tracker import TRACKER_HEADS_REGISTRY -from .vanilla_hungarian_bbox_iou_tracker import VanillaHungarianBBoxIOUTracker - - -@TRACKER_HEADS_REGISTRY.register() -class IOUWeightedHungarianBBoxIOUTracker(VanillaHungarianBBoxIOUTracker): - """ - A tracker using IoU as weight in Hungarian algorithm, also known - as Munkres or Kuhn-Munkres algorithm - """ - - @configurable - def __init__( - self, - *, - video_height: int, - video_width: int, - max_num_instances: int = 200, - max_lost_frame_count: int = 0, - min_box_rel_dim: float = 0.02, - min_instance_period: int = 1, - track_iou_threshold: float = 0.5, - **kwargs, - ): - """ - Args: - video_height: height the video frame - video_width: width of the video frame - max_num_instances: maximum number of id allowed to be tracked - max_lost_frame_count: maximum number of frame an id can lost tracking - exceed this number, an id is considered as lost - forever - min_box_rel_dim: a percentage, smaller than this dimension, a bbox is - removed from tracking - min_instance_period: an instance will be shown after this number of period - since its first showing up in the video - track_iou_threshold: iou 
threshold, below this number a bbox pair is removed - from tracking - """ - super().__init__( - video_height=video_height, - video_width=video_width, - max_num_instances=max_num_instances, - max_lost_frame_count=max_lost_frame_count, - min_box_rel_dim=min_box_rel_dim, - min_instance_period=min_instance_period, - track_iou_threshold=track_iou_threshold, - ) - - @classmethod - def from_config(cls, cfg: CfgNode_): - """ - Old style initialization using CfgNode - - Args: - cfg: D2 CfgNode, config file - Return: - dictionary storing arguments for __init__ method - """ - assert "VIDEO_HEIGHT" in cfg.TRACKER_HEADS - assert "VIDEO_WIDTH" in cfg.TRACKER_HEADS - video_height = cfg.TRACKER_HEADS.get("VIDEO_HEIGHT") - video_width = cfg.TRACKER_HEADS.get("VIDEO_WIDTH") - max_num_instances = cfg.TRACKER_HEADS.get("MAX_NUM_INSTANCES", 200) - max_lost_frame_count = cfg.TRACKER_HEADS.get("MAX_LOST_FRAME_COUNT", 0) - min_box_rel_dim = cfg.TRACKER_HEADS.get("MIN_BOX_REL_DIM", 0.02) - min_instance_period = cfg.TRACKER_HEADS.get("MIN_INSTANCE_PERIOD", 1) - track_iou_threshold = cfg.TRACKER_HEADS.get("TRACK_IOU_THRESHOLD", 0.5) - return { - "_target_": "detectron2.tracking.iou_weighted_hungarian_bbox_iou_tracker.IOUWeightedHungarianBBoxIOUTracker", # noqa - "video_height": video_height, - "video_width": video_width, - "max_num_instances": max_num_instances, - "max_lost_frame_count": max_lost_frame_count, - "min_box_rel_dim": min_box_rel_dim, - "min_instance_period": min_instance_period, - "track_iou_threshold": track_iou_threshold, - } - - def assign_cost_matrix_values(self, cost_matrix: np.ndarray, bbox_pairs: List) -> np.ndarray: - """ - Based on IoU for each pair of bbox, assign the associated value in cost matrix - - Args: - cost_matrix: np.ndarray, initialized 2D array with target dimensions - bbox_pairs: list of bbox pair, in each pair, iou value is stored - Return: - np.ndarray, cost_matrix with assigned values - """ - for pair in bbox_pairs: - # assign (-1 * IoU) for above 
threshold pairs, algorithms will minimize cost - cost_matrix[pair["idx"]][pair["prev_idx"]] = -1 * pair["IoU"] - return cost_matrix diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/data/datasets/builtin.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/data/datasets/builtin.py deleted file mode 100644 index 7572cd6abc550fdce9d1fd079a7af4870de303bb..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/data/datasets/builtin.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .chimpnsee import register_dataset as register_chimpnsee_dataset -from .coco import BASE_DATASETS as BASE_COCO_DATASETS -from .coco import DATASETS as COCO_DATASETS -from .coco import register_datasets as register_coco_datasets -from .lvis import DATASETS as LVIS_DATASETS -from .lvis import register_datasets as register_lvis_datasets - -DEFAULT_DATASETS_ROOT = "datasets" - - -register_coco_datasets(COCO_DATASETS, DEFAULT_DATASETS_ROOT) -register_coco_datasets(BASE_COCO_DATASETS, DEFAULT_DATASETS_ROOT) -register_lvis_datasets(LVIS_DATASETS, DEFAULT_DATASETS_ROOT) - -register_chimpnsee_dataset(DEFAULT_DATASETS_ROOT) # pyre-ignore[19] diff --git a/spaces/nomic-ai/csebuetnlp_xlsum/README.md b/spaces/nomic-ai/csebuetnlp_xlsum/README.md deleted file mode 100644 index 1796feef4bf74bd305189e9e648ececd4d913f49..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/csebuetnlp_xlsum/README.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: csebuetnlp/xlsum -emoji: 🗺️ -colorFrom: purple -colorTo: red -sdk: static -pinned: false ---- diff --git a/spaces/nomic-ai/kunishou_databricks-dolly-15k-ja/README.md b/spaces/nomic-ai/kunishou_databricks-dolly-15k-ja/README.md deleted file mode 100644 index 9b1a7777447ce59a9073e5a24fec0ac82d766acf..0000000000000000000000000000000000000000 --- 
a/spaces/nomic-ai/kunishou_databricks-dolly-15k-ja/README.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: kunishou/databricks-dolly-15k-ja -emoji: 🗺️ -colorFrom: purple -colorTo: red -sdk: static -pinned: false -duplicated_from: nomic-ai/Dahoas_full-hh-rlhf ---- diff --git a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/layers/read_array_ifstream.h b/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/layers/read_array_ifstream.h deleted file mode 100644 index 3ea2bd1375435cc316e18c619767334e80040ac1..0000000000000000000000000000000000000000 --- a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/layers/read_array_ifstream.h +++ /dev/null @@ -1,66 +0,0 @@ -/* - * Copyright 2021 Google LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -// Low-level array reading function using std::ifstream. 
- -#ifndef LYRA_CODEC_SPARSE_MATMUL_LAYERS_READ_ARRAY_IFSTREAM_H_ -#define LYRA_CODEC_SPARSE_MATMUL_LAYERS_READ_ARRAY_IFSTREAM_H_ - -#include -#include -#include -#include - -#include "absl/status/status.h" -#include "absl/strings/substitute.h" -#include "include/ghc/filesystem.hpp" - -namespace csrblocksparse { -namespace detail { - -template -absl::Status ReadArrayIfstream(const std::string& file_name, - const std::string& path, std::vector* array, - int64_t* length) { - ghc::filesystem::path complete_path(path); - complete_path /= file_name; - std::ifstream in_stream(complete_path.u8string(), std::ios::binary); - if (!in_stream.is_open()) { - return absl::UnknownError( - absl::Substitute("Error opening $0", complete_path.string())); - } - - std::stringstream buffer; - buffer << in_stream.rdbuf(); - if (buffer.str().empty()) { - LOG(ERROR) << "File " << complete_path << " was empty."; - return absl::UnknownError( - absl::Substitute("File $0 was empty", complete_path.string())); - } - std::string contents = buffer.str(); - *length = contents.length(); - int64_t elem = (*length + sizeof(T) - 1) / sizeof(T); - array->resize(elem); - std::move(contents.begin(), contents.end(), - reinterpret_cast(array->data())); - - return absl::OkStatus(); -} - -} // namespace detail -} // namespace csrblocksparse - -#endif // LYRA_CODEC_SPARSE_MATMUL_LAYERS_READ_ARRAY_IFSTREAM_H_ diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/models/utils/RAFT/utils/utils.py b/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/models/utils/RAFT/utils/utils.py deleted file mode 100644 index 5f32d281c1c46353a0a2bf36b0550adb74125c65..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/models/utils/RAFT/utils/utils.py +++ /dev/null @@ -1,82 +0,0 @@ -import torch -import torch.nn.functional as F -import numpy as np -from scipy import interpolate - - -class InputPadder: - """ Pads images such that dimensions are divisible by 8 """ - def 
__init__(self, dims, mode='sintel'): - self.ht, self.wd = dims[-2:] - pad_ht = (((self.ht // 8) + 1) * 8 - self.ht) % 8 - pad_wd = (((self.wd // 8) + 1) * 8 - self.wd) % 8 - if mode == 'sintel': - self._pad = [pad_wd//2, pad_wd - pad_wd//2, pad_ht//2, pad_ht - pad_ht//2] - else: - self._pad = [pad_wd//2, pad_wd - pad_wd//2, 0, pad_ht] - - def pad(self, *inputs): - return [F.pad(x, self._pad, mode='replicate') for x in inputs] - - def unpad(self,x): - ht, wd = x.shape[-2:] - c = [self._pad[2], ht-self._pad[3], self._pad[0], wd-self._pad[1]] - return x[..., c[0]:c[1], c[2]:c[3]] - -def forward_interpolate(flow): - flow = flow.detach().cpu().numpy() - dx, dy = flow[0], flow[1] - - ht, wd = dx.shape - x0, y0 = np.meshgrid(np.arange(wd), np.arange(ht)) - - x1 = x0 + dx - y1 = y0 + dy - - x1 = x1.reshape(-1) - y1 = y1.reshape(-1) - dx = dx.reshape(-1) - dy = dy.reshape(-1) - - valid = (x1 > 0) & (x1 < wd) & (y1 > 0) & (y1 < ht) - x1 = x1[valid] - y1 = y1[valid] - dx = dx[valid] - dy = dy[valid] - - flow_x = interpolate.griddata( - (x1, y1), dx, (x0, y0), method='nearest', fill_value=0) - - flow_y = interpolate.griddata( - (x1, y1), dy, (x0, y0), method='nearest', fill_value=0) - - flow = np.stack([flow_x, flow_y], axis=0) - return torch.from_numpy(flow).float() - - -def bilinear_sampler(img, coords, mode='bilinear', mask=False): - """ Wrapper for grid_sample, uses pixel coordinates """ - H, W = img.shape[-2:] - xgrid, ygrid = coords.split([1,1], dim=-1) - xgrid = 2*xgrid/(W-1) - 1 - ygrid = 2*ygrid/(H-1) - 1 - - grid = torch.cat([xgrid, ygrid], dim=-1) - img = F.grid_sample(img, grid, align_corners=True) - - if mask: - mask = (xgrid > -1) & (ygrid > -1) & (xgrid < 1) & (ygrid < 1) - return img, mask.float() - - return img - - -def coords_grid(batch, ht, wd): - coords = torch.meshgrid(torch.arange(ht), torch.arange(wd)) - coords = torch.stack(coords[::-1], dim=0).float() - return coords[None].repeat(batch, 1, 1, 1) - - -def upflow8(flow, mode='bilinear'): - new_size = (8 
* flow.shape[2], 8 * flow.shape[3]) - return 8 * F.interpolate(flow, size=new_size, mode=mode, align_corners=True) diff --git a/spaces/osanseviero/gradio_auth/README.md b/spaces/osanseviero/gradio_auth/README.md deleted file mode 100644 index 7ce70fb80536a0cf95688ab19bade847bcfe50df..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/gradio_auth/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gradio Auth -emoji: 🐢 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/pix2pix.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/pix2pix.md deleted file mode 100644 index f921922e4bb58442e4860a10264507b18fd14f78..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/pix2pix.md +++ /dev/null @@ -1,46 +0,0 @@ - - -# InstructPix2Pix - -[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://huggingface.co/papers/2211.09800) is by Tim Brooks, Aleksander Holynski and Alexei A. Efros. - -The abstract from the paper is: - -*We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models -- a language model (GPT-3) and a text-to-image model (Stable Diffusion) -- to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. 
Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.* - -You can find additional information about InstructPix2Pix on the [project page](https://www.timothybrooks.com/instruct-pix2pix), [original codebase](https://github.com/timothybrooks/instruct-pix2pix), and try it out in a [demo](https://huggingface.co/spaces/timbrooks/instruct-pix2pix). - - - -Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. - - - -## StableDiffusionInstructPix2PixPipeline -[[autodoc]] StableDiffusionInstructPix2PixPipeline - - __call__ - - all - - load_textual_inversion - - load_lora_weights - - save_lora_weights - -## StableDiffusionPipelineOutput -[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput - -## StableDiffusionXLInstructPix2PixPipeline -[[autodoc]] StableDiffusionXLInstructPix2PixPipeline - - __call__ - - all - -## StableDiffusionXLPipelineOutput -[[autodoc]] pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput \ No newline at end of file diff --git a/spaces/passaglia/yomikata-demo/yomikata/reader.py b/spaces/passaglia/yomikata-demo/yomikata/reader.py deleted file mode 100644 index f0ed1bff204c0c25b23e48e2757849de9ed4d51e..0000000000000000000000000000000000000000 --- a/spaces/passaglia/yomikata-demo/yomikata/reader.py +++ /dev/null @@ -1,19 +0,0 @@ -""" reader.py -An abstract class for assigning readings to Japanese sentences. 
-""" -import abc - - -class Reader(abc.ABC): - @abc.abstractmethod - def furigana(self, text: str) -> str: - """Add furigana to Japanese text - - Args: - text (str): a sentence in Japanese - - Returns: - str: sentence annotated with furigana - - """ - pass diff --git a/spaces/pierreant-p/huggingfab/index.html b/spaces/pierreant-p/huggingfab/index.html deleted file mode 100644 index 52c24a0b5de865b60bbe0fb4647fe826fee30b5d..0000000000000000000000000000000000000000 --- a/spaces/pierreant-p/huggingfab/index.html +++ /dev/null @@ -1,20 +0,0 @@ - - - - - - - HuggingFab: A HuggingFace + Sketchfab Experiment - - - - - - - - - - - - - diff --git a/spaces/pinkq/Newbing/src/lib/hooks/use-bing.ts b/spaces/pinkq/Newbing/src/lib/hooks/use-bing.ts deleted file mode 100644 index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000 --- a/spaces/pinkq/Newbing/src/lib/hooks/use-bing.ts +++ /dev/null @@ -1,173 +0,0 @@ -'use client' - -import { useState, useCallback, useEffect, useMemo } from 'react' -import { useAtom, useAtomValue } from 'jotai' -import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state' -import { setConversationMessages } from './chat-history' -import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types' -import { nanoid } from '../utils' -import { TTS } from '../bots/bing/tts' - -export function useBing(botId: BotId = 'bing') { - const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId]) - const [enableTTS] = useAtom(voiceAtom) - const speaker = useMemo(() => new TTS(), []) - const [hash, setHash] = useAtom(hashAtom) - const bingConversationStyle = useAtomValue(bingConversationStyleAtom) - const [chatState, setChatState] = useAtom(chatAtom) - const [input, setInput] = useState('') - const [attachmentList, setAttachmentList] = useState([]) - - const updateMessage = useCallback( - (messageId: string, updater: (message: ChatMessageModel) => void) => { - 
setChatState((draft) => { - const message = draft.messages.find((m) => m.id === messageId) - if (message) { - updater(message) - } - }) - }, - [setChatState], - ) - - const sendMessage = useCallback( - async (input: string, options = {}) => { - const botMessageId = nanoid() - const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined - setChatState((draft) => { - const text = imageUrl ? `${input}\n\n![image](${imageUrl})` : input - draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' }) - setAttachmentList([]) - }) - const abortController = new AbortController() - setChatState((draft) => { - draft.generatingMessageId = botMessageId - draft.abortController = abortController - }) - speaker.reset() - await chatState.bot.sendMessage({ - prompt: input, - imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? `https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl, - options: { - ...options, - bingConversationStyle, - }, - signal: abortController.signal, - onEvent(event) { - if (event.type === 'UPDATE_ANSWER') { - updateMessage(botMessageId, (message) => { - if (event.data.text.length > message.text.length) { - message.text = event.data.text - } - - if (event.data.spokenText && enableTTS) { - speaker.speak(event.data.spokenText) - } - - message.throttling = event.data.throttling || message.throttling - message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions - message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses - }) - } else if (event.type === 'ERROR') { - updateMessage(botMessageId, (message) => { - message.error = event.error - }) - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } else if (event.type === 'DONE') { - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } - }, - }) - }, - [botId, attachmentList, 
chatState.bot, setChatState, updateMessage], - ) - - const uploadImage = useCallback(async (imgUrl: string) => { - setAttachmentList([{ url: imgUrl, status: 'loading' }]) - const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle) - if (response?.blobId) { - setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }]) - } else { - setAttachmentList([{ url: imgUrl, status: 'error' }]) - } - }, [chatState.bot]) - - const resetConversation = useCallback(() => { - chatState.bot.resetConversation() - speaker.abort() - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }] - draft.conversationId = nanoid() - }) - }, [chatState.bot, setChatState]) - - const stopGenerating = useCallback(() => { - chatState.abortController?.abort() - if (chatState.generatingMessageId) { - updateMessage(chatState.generatingMessageId, (message) => { - if (!message.text && !message.error) { - message.text = 'Cancelled' - } - }) - } - setChatState((draft) => { - draft.generatingMessageId = '' - }) - }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage]) - - useEffect(() => { - if (chatState.messages.length) { - setConversationMessages(botId, chatState.conversationId, chatState.messages) - } - }, [botId, chatState.conversationId, chatState.messages]) - - useEffect(() => { - if (hash === 'reset') { - resetConversation() - setHash('') - } - }, [hash, setHash]) - - const chat = useMemo( - () => ({ - botId, - bot: chatState.bot, - isSpeaking: speaker.isSpeaking, - messages: chatState.messages, - sendMessage, - setInput, - input, - resetConversation, - generating: !!chatState.generatingMessageId, - stopGenerating, - uploadImage, - setAttachmentList, - attachmentList, - }), - [ - botId, - bingConversationStyle, - chatState.bot, - chatState.generatingMessageId, 
- chatState.messages, - speaker.isSpeaking, - setInput, - input, - setAttachmentList, - attachmentList, - resetConversation, - sendMessage, - stopGenerating, - ], - ) - - return chat -} diff --git a/spaces/pkiage/credit_risk_modeling_demo/common/data.py b/spaces/pkiage/credit_risk_modeling_demo/common/data.py deleted file mode 100644 index a230b652c52e3b4e35b94283d4d711352f5a3868..0000000000000000000000000000000000000000 --- a/spaces/pkiage/credit_risk_modeling_demo/common/data.py +++ /dev/null @@ -1,94 +0,0 @@ -from typing import List, Union, cast -from dataclasses import dataclass -from sklearn.model_selection import train_test_split -import pandas as pd - -from common.util import drop_columns - - -@dataclass -class SplitDataset: - X_test: pd.DataFrame - X_train: pd.DataFrame - y_test: pd.Series - y_train: pd.Series - - @property - def X_y_test(self) -> pd.DataFrame: - return pd.concat( - cast( - List[Union[pd.DataFrame, pd.Series]], - [ - self.X_test.reset_index(drop=True), - self.y_test.reset_index(drop=True), - ], - ), - axis=1, - ) - - @property - def X_y_train(self) -> pd.DataFrame: - return pd.concat( - cast( - List[Union[pd.DataFrame, pd.Series]], - [ - self.X_train.reset_index(drop=True), - self.y_train.reset_index(drop=True), - ], - ), - axis=1, - ) - - -@dataclass -class Dataset: - df: pd.DataFrame - random_state: int - test_size: int - - @property - def y_value(self) -> pd.DataFrame: - return self.df["loan_status"] - - @property - def x_values(self) -> pd.DataFrame: - return cast( - pd.DataFrame, - drop_columns( - self.df, - [ - "loan_status", - "loan_grade_A", - "loan_grade_B", - "loan_grade_C", - "loan_grade_D", - "loan_grade_E", - "loan_grade_F", - "loan_grade_G", - ], - ), - ) - - @property - def x_values_column_names(self): - return self.x_values.columns.tolist() - - def x_values_filtered_columns(self, columns: List[str]) -> pd.DataFrame: - return self.df.filter(columns) - - def train_test_split( - self, selected_x_values: pd.DataFrame - ) -> 
SplitDataset: - X_train, X_test, y_train, y_test = train_test_split( - selected_x_values, - self.y_value, - test_size=self.test_size / 100, # since up was given as pct - random_state=self.random_state, - ) - - return SplitDataset( - X_train=cast(pd.DataFrame, X_train), - X_test=cast(pd.DataFrame, X_test), - y_train=cast(pd.Series, y_train), - y_test=cast(pd.Series, y_test), - ) diff --git a/spaces/pkiage/credit_risk_modeling_demo/views/__init__.py b/spaces/pkiage/credit_risk_modeling_demo/views/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/main.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/main.py deleted file mode 100644 index 7e061f5b39081f39e9f4fa2a0e88aec0e0a3da79..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/main.py +++ /dev/null @@ -1,79 +0,0 @@ -"""Primary application entrypoint. -""" -import locale -import logging -import os -import sys -import warnings -from typing import List, Optional - -from pip._internal.cli.autocompletion import autocomplete -from pip._internal.cli.main_parser import parse_command -from pip._internal.commands import create_command -from pip._internal.exceptions import PipError -from pip._internal.utils import deprecation - -logger = logging.getLogger(__name__) - - -# Do not import and use main() directly! Using it directly is actively -# discouraged by pip's maintainers. The name, location and behavior of -# this function is subject to change, so calling it directly is not -# portable across different pip versions. - -# In addition, running pip in-process is unsupported and unsafe. This is -# elaborated in detail at -# https://pip.pypa.io/en/stable/user_guide/#using-pip-from-your-program. 
-# That document also provides suggestions that should work for nearly -# all users that are considering importing and using main() directly. - -# However, we know that certain users will still want to invoke pip -# in-process. If you understand and accept the implications of using pip -# in an unsupported manner, the best approach is to use runpy to avoid -# depending on the exact location of this entry point. - -# The following example shows how to use runpy to invoke pip in that -# case: -# -# sys.argv = ["pip", your, args, here] -# runpy.run_module("pip", run_name="__main__") -# -# Note that this will exit the process after running, unlike a direct -# call to main. As it is not safe to do any processing after calling -# main, this should not be an issue in practice. - - -def main(args: Optional[List[str]] = None) -> int: - if args is None: - args = sys.argv[1:] - - # Suppress the pkg_resources deprecation warning - # Note - we use a module of .*pkg_resources to cover - # the normal case (pip._vendor.pkg_resources) and the - # devendored case (a bare pkg_resources) - warnings.filterwarnings( - action="ignore", category=DeprecationWarning, module=".*pkg_resources" - ) - - # Configure our deprecation warnings to be sent through loggers - deprecation.install_warning_logger() - - autocomplete() - - try: - cmd_name, cmd_args = parse_command(args) - except PipError as exc: - sys.stderr.write(f"ERROR: {exc}") - sys.stderr.write(os.linesep) - sys.exit(1) - - # Needed for locale.getpreferredencoding(False) to work - # in pip._internal.utils.encoding.auto_decode - try: - locale.setlocale(locale.LC_ALL, "") - except locale.Error as e: - # setlocale can apparently crash if locale are uninitialized - logger.debug("Ignoring error %s when setting locale", e) - command = create_command(cmd_name, isolated=("--isolated" in cmd_args)) - - return command.main(cmd_args) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/colorama/winterm.py 
b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/colorama/winterm.py deleted file mode 100644 index aad867e8c80b826bf6a060116f17fa08a8eb0765..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/colorama/winterm.py +++ /dev/null @@ -1,195 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -try: - from msvcrt import get_osfhandle -except ImportError: - def get_osfhandle(_): - raise OSError("This isn't windows!") - - -from . import win32 - -# from wincon.h -class WinColor(object): - BLACK = 0 - BLUE = 1 - GREEN = 2 - CYAN = 3 - RED = 4 - MAGENTA = 5 - YELLOW = 6 - GREY = 7 - -# from wincon.h -class WinStyle(object): - NORMAL = 0x00 # dim text, dim background - BRIGHT = 0x08 # bright text, dim background - BRIGHT_BACKGROUND = 0x80 # dim text, bright background - -class WinTerm(object): - - def __init__(self): - self._default = win32.GetConsoleScreenBufferInfo(win32.STDOUT).wAttributes - self.set_attrs(self._default) - self._default_fore = self._fore - self._default_back = self._back - self._default_style = self._style - # In order to emulate LIGHT_EX in windows, we borrow the BRIGHT style. - # So that LIGHT_EX colors and BRIGHT style do not clobber each other, - # we track them separately, since LIGHT_EX is overwritten by Fore/Back - # and BRIGHT is overwritten by Style codes. 
- self._light = 0 - - def get_attrs(self): - return self._fore + self._back * 16 + (self._style | self._light) - - def set_attrs(self, value): - self._fore = value & 7 - self._back = (value >> 4) & 7 - self._style = value & (WinStyle.BRIGHT | WinStyle.BRIGHT_BACKGROUND) - - def reset_all(self, on_stderr=None): - self.set_attrs(self._default) - self.set_console(attrs=self._default) - self._light = 0 - - def fore(self, fore=None, light=False, on_stderr=False): - if fore is None: - fore = self._default_fore - self._fore = fore - # Emulate LIGHT_EX with BRIGHT Style - if light: - self._light |= WinStyle.BRIGHT - else: - self._light &= ~WinStyle.BRIGHT - self.set_console(on_stderr=on_stderr) - - def back(self, back=None, light=False, on_stderr=False): - if back is None: - back = self._default_back - self._back = back - # Emulate LIGHT_EX with BRIGHT_BACKGROUND Style - if light: - self._light |= WinStyle.BRIGHT_BACKGROUND - else: - self._light &= ~WinStyle.BRIGHT_BACKGROUND - self.set_console(on_stderr=on_stderr) - - def style(self, style=None, on_stderr=False): - if style is None: - style = self._default_style - self._style = style - self.set_console(on_stderr=on_stderr) - - def set_console(self, attrs=None, on_stderr=False): - if attrs is None: - attrs = self.get_attrs() - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - win32.SetConsoleTextAttribute(handle, attrs) - - def get_position(self, handle): - position = win32.GetConsoleScreenBufferInfo(handle).dwCursorPosition - # Because Windows coordinates are 0-based, - # and win32.SetConsoleCursorPosition expects 1-based. - position.X += 1 - position.Y += 1 - return position - - def set_cursor_position(self, position=None, on_stderr=False): - if position is None: - # I'm not currently tracking the position, so there is no default. 
- # position = self.get_position() - return - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - win32.SetConsoleCursorPosition(handle, position) - - def cursor_adjust(self, x, y, on_stderr=False): - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - position = self.get_position(handle) - adjusted_position = (position.Y + y, position.X + x) - win32.SetConsoleCursorPosition(handle, adjusted_position, adjust=False) - - def erase_screen(self, mode=0, on_stderr=False): - # 0 should clear from the cursor to the end of the screen. - # 1 should clear from the cursor to the beginning of the screen. - # 2 should clear the entire screen, and move cursor to (1,1) - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - csbi = win32.GetConsoleScreenBufferInfo(handle) - # get the number of character cells in the current buffer - cells_in_screen = csbi.dwSize.X * csbi.dwSize.Y - # get number of character cells before current cursor position - cells_before_cursor = csbi.dwSize.X * csbi.dwCursorPosition.Y + csbi.dwCursorPosition.X - if mode == 0: - from_coord = csbi.dwCursorPosition - cells_to_erase = cells_in_screen - cells_before_cursor - elif mode == 1: - from_coord = win32.COORD(0, 0) - cells_to_erase = cells_before_cursor - elif mode == 2: - from_coord = win32.COORD(0, 0) - cells_to_erase = cells_in_screen - else: - # invalid mode - return - # fill the entire screen with blanks - win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord) - # now set the buffer's attributes accordingly - win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord) - if mode == 2: - # put the cursor where needed - win32.SetConsoleCursorPosition(handle, (1, 1)) - - def erase_line(self, mode=0, on_stderr=False): - # 0 should clear from the cursor to the end of the line. - # 1 should clear from the cursor to the beginning of the line. - # 2 should clear the entire line. 
- handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - csbi = win32.GetConsoleScreenBufferInfo(handle) - if mode == 0: - from_coord = csbi.dwCursorPosition - cells_to_erase = csbi.dwSize.X - csbi.dwCursorPosition.X - elif mode == 1: - from_coord = win32.COORD(0, csbi.dwCursorPosition.Y) - cells_to_erase = csbi.dwCursorPosition.X - elif mode == 2: - from_coord = win32.COORD(0, csbi.dwCursorPosition.Y) - cells_to_erase = csbi.dwSize.X - else: - # invalid mode - return - # fill the entire screen with blanks - win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord) - # now set the buffer's attributes accordingly - win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord) - - def set_title(self, title): - win32.SetConsoleTitle(title) - - -def enable_vt_processing(fd): - if win32.windll is None or not win32.winapi_test(): - return False - - try: - handle = get_osfhandle(fd) - mode = win32.GetConsoleMode(handle) - win32.SetConsoleMode( - handle, - mode | win32.ENABLE_VIRTUAL_TERMINAL_PROCESSING, - ) - - mode = win32.GetConsoleMode(handle) - if mode & win32.ENABLE_VIRTUAL_TERMINAL_PROCESSING: - return True - # Can get TypeError in testsuite where 'fd' is a Mock() - except (OSError, TypeError): - return False diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/pretty.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/pretty.py deleted file mode 100644 index 2bd9eb0073d3e0a6c56311b42097ff322f75dcdd..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/pretty.py +++ /dev/null @@ -1,994 +0,0 @@ -import builtins -import collections -import dataclasses -import inspect -import os -import sys -from array import array -from collections import Counter, UserDict, UserList, defaultdict, deque -from dataclasses import dataclass, fields, is_dataclass -from inspect import isclass 
-from itertools import islice -from types import MappingProxyType -from typing import ( - TYPE_CHECKING, - Any, - Callable, - DefaultDict, - Dict, - Iterable, - List, - Optional, - Sequence, - Set, - Tuple, - Union, -) - -from pip._vendor.rich.repr import RichReprResult - -try: - import attr as _attr_module - - _has_attrs = hasattr(_attr_module, "ib") -except ImportError: # pragma: no cover - _has_attrs = False - -from . import get_console -from ._loop import loop_last -from ._pick import pick_bool -from .abc import RichRenderable -from .cells import cell_len -from .highlighter import ReprHighlighter -from .jupyter import JupyterMixin, JupyterRenderable -from .measure import Measurement -from .text import Text - -if TYPE_CHECKING: - from .console import ( - Console, - ConsoleOptions, - HighlighterType, - JustifyMethod, - OverflowMethod, - RenderResult, - ) - - -def _is_attr_object(obj: Any) -> bool: - """Check if an object was created with attrs module.""" - return _has_attrs and _attr_module.has(type(obj)) - - -def _get_attr_fields(obj: Any) -> Sequence["_attr_module.Attribute[Any]"]: - """Get fields for an attrs object.""" - return _attr_module.fields(type(obj)) if _has_attrs else [] - - -def _is_dataclass_repr(obj: object) -> bool: - """Check if an instance of a dataclass contains the default repr. - - Args: - obj (object): A dataclass instance. - - Returns: - bool: True if the default repr is used, False if there is a custom repr. 
- """ - # Digging in to a lot of internals here - # Catching all exceptions in case something is missing on a non CPython implementation - try: - return obj.__repr__.__code__.co_filename == dataclasses.__file__ - except Exception: # pragma: no coverage - return False - - -_dummy_namedtuple = collections.namedtuple("_dummy_namedtuple", []) - - -def _has_default_namedtuple_repr(obj: object) -> bool: - """Check if an instance of namedtuple contains the default repr - - Args: - obj (object): A namedtuple - - Returns: - bool: True if the default repr is used, False if there's a custom repr. - """ - obj_file = None - try: - obj_file = inspect.getfile(obj.__repr__) - except (OSError, TypeError): - # OSError handles case where object is defined in __main__ scope, e.g. REPL - no filename available. - # TypeError trapped defensively, in case of object without filename slips through. - pass - default_repr_file = inspect.getfile(_dummy_namedtuple.__repr__) - return obj_file == default_repr_file - - -def _ipy_display_hook( - value: Any, - console: Optional["Console"] = None, - overflow: "OverflowMethod" = "ignore", - crop: bool = False, - indent_guides: bool = False, - max_length: Optional[int] = None, - max_string: Optional[int] = None, - max_depth: Optional[int] = None, - expand_all: bool = False, -) -> Union[str, None]: - # needed here to prevent circular import: - from .console import ConsoleRenderable - - # always skip rich generated jupyter renderables or None values - if _safe_isinstance(value, JupyterRenderable) or value is None: - return None - - console = console or get_console() - - with console.capture() as capture: - # certain renderables should start on a new line - if _safe_isinstance(value, ConsoleRenderable): - console.line() - console.print( - value - if _safe_isinstance(value, RichRenderable) - else Pretty( - value, - overflow=overflow, - indent_guides=indent_guides, - max_length=max_length, - max_string=max_string, - max_depth=max_depth, - 
expand_all=expand_all, - margin=12, - ), - crop=crop, - new_line_start=True, - end="", - ) - # strip trailing newline, not usually part of a text repr - # I'm not sure if this should be prevented at a lower level - return capture.get().rstrip("\n") - - -def _safe_isinstance( - obj: object, class_or_tuple: Union[type, Tuple[type, ...]] -) -> bool: - """isinstance can fail in rare cases, for example types with no __class__""" - try: - return isinstance(obj, class_or_tuple) - except Exception: - return False - - -def install( - console: Optional["Console"] = None, - overflow: "OverflowMethod" = "ignore", - crop: bool = False, - indent_guides: bool = False, - max_length: Optional[int] = None, - max_string: Optional[int] = None, - max_depth: Optional[int] = None, - expand_all: bool = False, -) -> None: - """Install automatic pretty printing in the Python REPL. - - Args: - console (Console, optional): Console instance or ``None`` to use global console. Defaults to None. - overflow (Optional[OverflowMethod], optional): Overflow method. Defaults to "ignore". - crop (Optional[bool], optional): Enable cropping of long lines. Defaults to False. - indent_guides (bool, optional): Enable indentation guides. Defaults to False. - max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to None. - max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to None. - max_depth (int, optional): Maximum depth of nested data structures, or None for no maximum. Defaults to None. - expand_all (bool, optional): Expand all containers. Defaults to False. - max_frames (int): Maximum number of frames to show in a traceback, 0 for no maximum. Defaults to 100. 
- """ - from pip._vendor.rich import get_console - - console = console or get_console() - assert console is not None - - def display_hook(value: Any) -> None: - """Replacement sys.displayhook which prettifies objects with Rich.""" - if value is not None: - assert console is not None - builtins._ = None # type: ignore[attr-defined] - console.print( - value - if _safe_isinstance(value, RichRenderable) - else Pretty( - value, - overflow=overflow, - indent_guides=indent_guides, - max_length=max_length, - max_string=max_string, - max_depth=max_depth, - expand_all=expand_all, - ), - crop=crop, - ) - builtins._ = value # type: ignore[attr-defined] - - if "get_ipython" in globals(): - ip = get_ipython() # type: ignore[name-defined] - from IPython.core.formatters import BaseFormatter - - class RichFormatter(BaseFormatter): # type: ignore[misc] - pprint: bool = True - - def __call__(self, value: Any) -> Any: - if self.pprint: - return _ipy_display_hook( - value, - console=get_console(), - overflow=overflow, - indent_guides=indent_guides, - max_length=max_length, - max_string=max_string, - max_depth=max_depth, - expand_all=expand_all, - ) - else: - return repr(value) - - # replace plain text formatter with rich formatter - rich_formatter = RichFormatter() - ip.display_formatter.formatters["text/plain"] = rich_formatter - else: - sys.displayhook = display_hook - - -class Pretty(JupyterMixin): - """A rich renderable that pretty prints an object. - - Args: - _object (Any): An object to pretty print. - highlighter (HighlighterType, optional): Highlighter object to apply to result, or None for ReprHighlighter. Defaults to None. - indent_size (int, optional): Number of spaces in indent. Defaults to 4. - justify (JustifyMethod, optional): Justify method, or None for default. Defaults to None. - overflow (OverflowMethod, optional): Overflow method, or None for default. Defaults to None. - no_wrap (Optional[bool], optional): Disable word wrapping. Defaults to False. 
- indent_guides (bool, optional): Enable indentation guides. Defaults to False. - max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to None. - max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to None. - max_depth (int, optional): Maximum depth of nested data structures, or None for no maximum. Defaults to None. - expand_all (bool, optional): Expand all containers. Defaults to False. - margin (int, optional): Subtrace a margin from width to force containers to expand earlier. Defaults to 0. - insert_line (bool, optional): Insert a new line if the output has multiple new lines. Defaults to False. - """ - - def __init__( - self, - _object: Any, - highlighter: Optional["HighlighterType"] = None, - *, - indent_size: int = 4, - justify: Optional["JustifyMethod"] = None, - overflow: Optional["OverflowMethod"] = None, - no_wrap: Optional[bool] = False, - indent_guides: bool = False, - max_length: Optional[int] = None, - max_string: Optional[int] = None, - max_depth: Optional[int] = None, - expand_all: bool = False, - margin: int = 0, - insert_line: bool = False, - ) -> None: - self._object = _object - self.highlighter = highlighter or ReprHighlighter() - self.indent_size = indent_size - self.justify: Optional["JustifyMethod"] = justify - self.overflow: Optional["OverflowMethod"] = overflow - self.no_wrap = no_wrap - self.indent_guides = indent_guides - self.max_length = max_length - self.max_string = max_string - self.max_depth = max_depth - self.expand_all = expand_all - self.margin = margin - self.insert_line = insert_line - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - pretty_str = pretty_repr( - self._object, - max_width=options.max_width - self.margin, - indent_size=self.indent_size, - max_length=self.max_length, - max_string=self.max_string, - max_depth=self.max_depth, - 
expand_all=self.expand_all, - ) - pretty_text = Text.from_ansi( - pretty_str, - justify=self.justify or options.justify, - overflow=self.overflow or options.overflow, - no_wrap=pick_bool(self.no_wrap, options.no_wrap), - style="pretty", - ) - pretty_text = ( - self.highlighter(pretty_text) - if pretty_text - else Text( - f"{type(self._object)}.__repr__ returned empty string", - style="dim italic", - ) - ) - if self.indent_guides and not options.ascii_only: - pretty_text = pretty_text.with_indent_guides( - self.indent_size, style="repr.indent" - ) - if self.insert_line and "\n" in pretty_text: - yield "" - yield pretty_text - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> "Measurement": - pretty_str = pretty_repr( - self._object, - max_width=options.max_width, - indent_size=self.indent_size, - max_length=self.max_length, - max_string=self.max_string, - max_depth=self.max_depth, - expand_all=self.expand_all, - ) - text_width = ( - max(cell_len(line) for line in pretty_str.splitlines()) if pretty_str else 0 - ) - return Measurement(text_width, text_width) - - -def _get_braces_for_defaultdict(_object: DefaultDict[Any, Any]) -> Tuple[str, str, str]: - return ( - f"defaultdict({_object.default_factory!r}, {{", - "})", - f"defaultdict({_object.default_factory!r}, {{}})", - ) - - -def _get_braces_for_array(_object: "array[Any]") -> Tuple[str, str, str]: - return (f"array({_object.typecode!r}, [", "])", f"array({_object.typecode!r})") - - -_BRACES: Dict[type, Callable[[Any], Tuple[str, str, str]]] = { - os._Environ: lambda _object: ("environ({", "})", "environ({})"), - array: _get_braces_for_array, - defaultdict: _get_braces_for_defaultdict, - Counter: lambda _object: ("Counter({", "})", "Counter()"), - deque: lambda _object: ("deque([", "])", "deque()"), - dict: lambda _object: ("{", "}", "{}"), - UserDict: lambda _object: ("{", "}", "{}"), - frozenset: lambda _object: ("frozenset({", "})", "frozenset()"), - list: lambda _object: ("[", 
"]", "[]"), - UserList: lambda _object: ("[", "]", "[]"), - set: lambda _object: ("{", "}", "set()"), - tuple: lambda _object: ("(", ")", "()"), - MappingProxyType: lambda _object: ("mappingproxy({", "})", "mappingproxy({})"), -} -_CONTAINERS = tuple(_BRACES.keys()) -_MAPPING_CONTAINERS = (dict, os._Environ, MappingProxyType, UserDict) - - -def is_expandable(obj: Any) -> bool: - """Check if an object may be expanded by pretty print.""" - return ( - _safe_isinstance(obj, _CONTAINERS) - or (is_dataclass(obj)) - or (hasattr(obj, "__rich_repr__")) - or _is_attr_object(obj) - ) and not isclass(obj) - - -@dataclass -class Node: - """A node in a repr tree. May be atomic or a container.""" - - key_repr: str = "" - value_repr: str = "" - open_brace: str = "" - close_brace: str = "" - empty: str = "" - last: bool = False - is_tuple: bool = False - is_namedtuple: bool = False - children: Optional[List["Node"]] = None - key_separator: str = ": " - separator: str = ", " - - def iter_tokens(self) -> Iterable[str]: - """Generate tokens for this node.""" - if self.key_repr: - yield self.key_repr - yield self.key_separator - if self.value_repr: - yield self.value_repr - elif self.children is not None: - if self.children: - yield self.open_brace - if self.is_tuple and not self.is_namedtuple and len(self.children) == 1: - yield from self.children[0].iter_tokens() - yield "," - else: - for child in self.children: - yield from child.iter_tokens() - if not child.last: - yield self.separator - yield self.close_brace - else: - yield self.empty - - def check_length(self, start_length: int, max_length: int) -> bool: - """Check the length fits within a limit. - - Args: - start_length (int): Starting length of the line (indent, prefix, suffix). - max_length (int): Maximum length. - - Returns: - bool: True if the node can be rendered within max length, otherwise False. 
- """ - total_length = start_length - for token in self.iter_tokens(): - total_length += cell_len(token) - if total_length > max_length: - return False - return True - - def __str__(self) -> str: - repr_text = "".join(self.iter_tokens()) - return repr_text - - def render( - self, max_width: int = 80, indent_size: int = 4, expand_all: bool = False - ) -> str: - """Render the node to a pretty repr. - - Args: - max_width (int, optional): Maximum width of the repr. Defaults to 80. - indent_size (int, optional): Size of indents. Defaults to 4. - expand_all (bool, optional): Expand all levels. Defaults to False. - - Returns: - str: A repr string of the original object. - """ - lines = [_Line(node=self, is_root=True)] - line_no = 0 - while line_no < len(lines): - line = lines[line_no] - if line.expandable and not line.expanded: - if expand_all or not line.check_length(max_width): - lines[line_no : line_no + 1] = line.expand(indent_size) - line_no += 1 - - repr_str = "\n".join(str(line) for line in lines) - return repr_str - - -@dataclass -class _Line: - """A line in repr output.""" - - parent: Optional["_Line"] = None - is_root: bool = False - node: Optional[Node] = None - text: str = "" - suffix: str = "" - whitespace: str = "" - expanded: bool = False - last: bool = False - - @property - def expandable(self) -> bool: - """Check if the line may be expanded.""" - return bool(self.node is not None and self.node.children) - - def check_length(self, max_length: int) -> bool: - """Check this line fits within a given number of cells.""" - start_length = ( - len(self.whitespace) + cell_len(self.text) + cell_len(self.suffix) - ) - assert self.node is not None - return self.node.check_length(start_length, max_length) - - def expand(self, indent_size: int) -> Iterable["_Line"]: - """Expand this line by adding children on their own line.""" - node = self.node - assert node is not None - whitespace = self.whitespace - assert node.children - if node.key_repr: - new_line = yield 
_Line( - text=f"{node.key_repr}{node.key_separator}{node.open_brace}", - whitespace=whitespace, - ) - else: - new_line = yield _Line(text=node.open_brace, whitespace=whitespace) - child_whitespace = self.whitespace + " " * indent_size - tuple_of_one = node.is_tuple and len(node.children) == 1 - for last, child in loop_last(node.children): - separator = "," if tuple_of_one else node.separator - line = _Line( - parent=new_line, - node=child, - whitespace=child_whitespace, - suffix=separator, - last=last and not tuple_of_one, - ) - yield line - - yield _Line( - text=node.close_brace, - whitespace=whitespace, - suffix=self.suffix, - last=self.last, - ) - - def __str__(self) -> str: - if self.last: - return f"{self.whitespace}{self.text}{self.node or ''}" - else: - return ( - f"{self.whitespace}{self.text}{self.node or ''}{self.suffix.rstrip()}" - ) - - -def _is_namedtuple(obj: Any) -> bool: - """Checks if an object is most likely a namedtuple. It is possible - to craft an object that passes this check and isn't a namedtuple, but - there is only a minuscule chance of this happening unintentionally. - - Args: - obj (Any): The object to test - - Returns: - bool: True if the object is a namedtuple. False otherwise. - """ - try: - fields = getattr(obj, "_fields", None) - except Exception: - # Being very defensive - if we cannot get the attr then its not a namedtuple - return False - return isinstance(obj, tuple) and isinstance(fields, tuple) - - -def traverse( - _object: Any, - max_length: Optional[int] = None, - max_string: Optional[int] = None, - max_depth: Optional[int] = None, -) -> Node: - """Traverse object and generate a tree. - - Args: - _object (Any): Object to be traversed. - max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to None. - max_string (int, optional): Maximum length of string before truncating, or None to disable truncating. - Defaults to None. 
- max_depth (int, optional): Maximum depth of data structures, or None for no maximum. - Defaults to None. - - Returns: - Node: The root of a tree structure which can be used to render a pretty repr. - """ - - def to_repr(obj: Any) -> str: - """Get repr string for an object, but catch errors.""" - if ( - max_string is not None - and _safe_isinstance(obj, (bytes, str)) - and len(obj) > max_string - ): - truncated = len(obj) - max_string - obj_repr = f"{obj[:max_string]!r}+{truncated}" - else: - try: - obj_repr = repr(obj) - except Exception as error: - obj_repr = f"<repr-error {str(error)!r}>" - return obj_repr - - visited_ids: Set[int] = set() - push_visited = visited_ids.add - pop_visited = visited_ids.remove - - def _traverse(obj: Any, root: bool = False, depth: int = 0) -> Node: - """Walk the object depth first.""" - - obj_id = id(obj) - if obj_id in visited_ids: - # Recursion detected - return Node(value_repr="...") - - obj_type = type(obj) - children: List[Node] - reached_max_depth = max_depth is not None and depth >= max_depth - - def iter_rich_args(rich_args: Any) -> Iterable[Union[Any, Tuple[str, Any]]]: - for arg in rich_args: - if _safe_isinstance(arg, tuple): - if len(arg) == 3: - key, child, default = arg - if default == child: - continue - yield key, child - elif len(arg) == 2: - key, child = arg - yield key, child - elif len(arg) == 1: - yield arg[0] - else: - yield arg - - try: - fake_attributes = hasattr( - obj, "awehoi234_wdfjwljet234_234wdfoijsdfmmnxpi492" - ) - except Exception: - fake_attributes = False - - rich_repr_result: Optional[RichReprResult] = None - if not fake_attributes: - try: - if hasattr(obj, "__rich_repr__") and not isclass(obj): - rich_repr_result = obj.__rich_repr__() - except Exception: - pass - - if rich_repr_result is not None: - push_visited(obj_id) - angular = getattr(obj.__rich_repr__, "angular", False) - args = list(iter_rich_args(rich_repr_result)) - class_name = obj.__class__.__name__ - - if args: - children = [] - append = children.append - - if 
reached_max_depth: - if angular: - node = Node(value_repr=f"<{class_name}...>") - else: - node = Node(value_repr=f"{class_name}(...)") - else: - if angular: - node = Node( - open_brace=f"<{class_name} ", - close_brace=">", - children=children, - last=root, - separator=" ", - ) - else: - node = Node( - open_brace=f"{class_name}(", - close_brace=")", - children=children, - last=root, - ) - for last, arg in loop_last(args): - if _safe_isinstance(arg, tuple): - key, child = arg - child_node = _traverse(child, depth=depth + 1) - child_node.last = last - child_node.key_repr = key - child_node.key_separator = "=" - append(child_node) - else: - child_node = _traverse(arg, depth=depth + 1) - child_node.last = last - append(child_node) - else: - node = Node( - value_repr=f"<{class_name}>" if angular else f"{class_name}()", - children=[], - last=root, - ) - pop_visited(obj_id) - elif _is_attr_object(obj) and not fake_attributes: - push_visited(obj_id) - children = [] - append = children.append - - attr_fields = _get_attr_fields(obj) - if attr_fields: - if reached_max_depth: - node = Node(value_repr=f"{obj.__class__.__name__}(...)") - else: - node = Node( - open_brace=f"{obj.__class__.__name__}(", - close_brace=")", - children=children, - last=root, - ) - - def iter_attrs() -> Iterable[ - Tuple[str, Any, Optional[Callable[[Any], str]]] - ]: - """Iterate over attr fields and values.""" - for attr in attr_fields: - if attr.repr: - try: - value = getattr(obj, attr.name) - except Exception as error: - # Can happen, albeit rarely - yield (attr.name, error, None) - else: - yield ( - attr.name, - value, - attr.repr if callable(attr.repr) else None, - ) - - for last, (name, value, repr_callable) in loop_last(iter_attrs()): - if repr_callable: - child_node = Node(value_repr=str(repr_callable(value))) - else: - child_node = _traverse(value, depth=depth + 1) - child_node.last = last - child_node.key_repr = name - child_node.key_separator = "=" - append(child_node) - else: - node = Node( 
- value_repr=f"{obj.__class__.__name__}()", children=[], last=root - ) - pop_visited(obj_id) - elif ( - is_dataclass(obj) - and not _safe_isinstance(obj, type) - and not fake_attributes - and _is_dataclass_repr(obj) - ): - push_visited(obj_id) - children = [] - append = children.append - if reached_max_depth: - node = Node(value_repr=f"{obj.__class__.__name__}(...)") - else: - node = Node( - open_brace=f"{obj.__class__.__name__}(", - close_brace=")", - children=children, - last=root, - empty=f"{obj.__class__.__name__}()", - ) - - for last, field in loop_last( - field for field in fields(obj) if field.repr - ): - child_node = _traverse(getattr(obj, field.name), depth=depth + 1) - child_node.key_repr = field.name - child_node.last = last - child_node.key_separator = "=" - append(child_node) - - pop_visited(obj_id) - elif _is_namedtuple(obj) and _has_default_namedtuple_repr(obj): - push_visited(obj_id) - class_name = obj.__class__.__name__ - if reached_max_depth: - # If we've reached the max depth, we still show the class name, but not its contents - node = Node( - value_repr=f"{class_name}(...)", - ) - else: - children = [] - append = children.append - node = Node( - open_brace=f"{class_name}(", - close_brace=")", - children=children, - empty=f"{class_name}()", - ) - for last, (key, value) in loop_last(obj._asdict().items()): - child_node = _traverse(value, depth=depth + 1) - child_node.key_repr = key - child_node.last = last - child_node.key_separator = "=" - append(child_node) - pop_visited(obj_id) - elif _safe_isinstance(obj, _CONTAINERS): - for container_type in _CONTAINERS: - if _safe_isinstance(obj, container_type): - obj_type = container_type - break - - push_visited(obj_id) - - open_brace, close_brace, empty = _BRACES[obj_type](obj) - - if reached_max_depth: - node = Node(value_repr=f"{open_brace}...{close_brace}") - elif obj_type.__repr__ != type(obj).__repr__: - node = Node(value_repr=to_repr(obj), last=root) - elif obj: - children = [] - node = Node( - 
open_brace=open_brace, - close_brace=close_brace, - children=children, - last=root, - ) - append = children.append - num_items = len(obj) - last_item_index = num_items - 1 - - if _safe_isinstance(obj, _MAPPING_CONTAINERS): - iter_items = iter(obj.items()) - if max_length is not None: - iter_items = islice(iter_items, max_length) - for index, (key, child) in enumerate(iter_items): - child_node = _traverse(child, depth=depth + 1) - child_node.key_repr = to_repr(key) - child_node.last = index == last_item_index - append(child_node) - else: - iter_values = iter(obj) - if max_length is not None: - iter_values = islice(iter_values, max_length) - for index, child in enumerate(iter_values): - child_node = _traverse(child, depth=depth + 1) - child_node.last = index == last_item_index - append(child_node) - if max_length is not None and num_items > max_length: - append(Node(value_repr=f"... +{num_items - max_length}", last=True)) - else: - node = Node(empty=empty, children=[], last=root) - - pop_visited(obj_id) - else: - node = Node(value_repr=to_repr(obj), last=root) - node.is_tuple = _safe_isinstance(obj, tuple) - node.is_namedtuple = _is_namedtuple(obj) - return node - - node = _traverse(_object, root=True) - return node - - -def pretty_repr( - _object: Any, - *, - max_width: int = 80, - indent_size: int = 4, - max_length: Optional[int] = None, - max_string: Optional[int] = None, - max_depth: Optional[int] = None, - expand_all: bool = False, -) -> str: - """Prettify repr string by expanding on to new lines to fit within a given width. - - Args: - _object (Any): Object to repr. - max_width (int, optional): Desired maximum width of repr string. Defaults to 80. - indent_size (int, optional): Number of spaces to indent. Defaults to 4. - max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to None. - max_string (int, optional): Maximum length of string before truncating, or None to disable truncating. 
- Defaults to None. - max_depth (int, optional): Maximum depth of nested data structure, or None for no depth. - Defaults to None. - expand_all (bool, optional): Expand all containers regardless of available width. Defaults to False. - - Returns: - str: A possibly multi-line representation of the object. - """ - - if _safe_isinstance(_object, Node): - node = _object - else: - node = traverse( - _object, max_length=max_length, max_string=max_string, max_depth=max_depth - ) - repr_str: str = node.render( - max_width=max_width, indent_size=indent_size, expand_all=expand_all - ) - return repr_str - - -def pprint( - _object: Any, - *, - console: Optional["Console"] = None, - indent_guides: bool = True, - max_length: Optional[int] = None, - max_string: Optional[int] = None, - max_depth: Optional[int] = None, - expand_all: bool = False, -) -> None: - """A convenience function for pretty printing. - - Args: - _object (Any): Object to pretty print. - console (Console, optional): Console instance, or None to use default. Defaults to None. - max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to None. - max_string (int, optional): Maximum length of strings before truncating, or None to disable. Defaults to None. - max_depth (int, optional): Maximum depth for nested data structures, or None for unlimited depth. Defaults to None. - indent_guides (bool, optional): Enable indentation guides. Defaults to True. - expand_all (bool, optional): Expand all containers. Defaults to False. 
- """ - _console = get_console() if console is None else console - _console.print( - Pretty( - _object, - max_length=max_length, - max_string=max_string, - max_depth=max_depth, - indent_guides=indent_guides, - expand_all=expand_all, - overflow="ignore", - ), - soft_wrap=True, - ) - - -if __name__ == "__main__": # pragma: no cover - - class BrokenRepr: - def __repr__(self) -> str: - 1 / 0 - return "this will fail" - - from typing import NamedTuple - - class StockKeepingUnit(NamedTuple): - name: str - description: str - price: float - category: str - reviews: List[str] - - d = defaultdict(int) - d["foo"] = 5 - data = { - "foo": [ - 1, - "Hello World!", - 100.123, - 323.232, - 432324.0, - {5, 6, 7, (1, 2, 3, 4), 8}, - ], - "bar": frozenset({1, 2, 3}), - "defaultdict": defaultdict( - list, {"crumble": ["apple", "rhubarb", "butter", "sugar", "flour"]} - ), - "counter": Counter( - [ - "apple", - "orange", - "pear", - "kumquat", - "kumquat", - "durian" * 100, - ] - ), - "atomic": (False, True, None), - "namedtuple": StockKeepingUnit( - "Sparkling British Spring Water", - "Carbonated spring water", - 0.9, - "water", - ["its amazing!", "its terrible!"], - ), - "Broken": BrokenRepr(), - } - data["foo"].append(data) # type: ignore[attr-defined] - - from pip._vendor.rich import print - - # print(Pretty(data, indent_guides=True, max_string=20)) - - class Thing: - def __repr__(self) -> str: - return "Hello\x1b[38;5;239m World!" 
- - print(Pretty(Thing())) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/text.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/text.py deleted file mode 100644 index 998cb87dab758332ecc17f8acddbd0378beef160..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/text.py +++ /dev/null @@ -1,1307 +0,0 @@ -import re -from functools import partial, reduce -from math import gcd -from operator import itemgetter -from typing import ( - TYPE_CHECKING, - Any, - Callable, - Dict, - Iterable, - List, - NamedTuple, - Optional, - Tuple, - Union, -) - -from ._loop import loop_last -from ._pick import pick_bool -from ._wrap import divide_line -from .align import AlignMethod -from .cells import cell_len, set_cell_size -from .containers import Lines -from .control import strip_control_codes -from .emoji import EmojiVariant -from .jupyter import JupyterMixin -from .measure import Measurement -from .segment import Segment -from .style import Style, StyleType - -if TYPE_CHECKING: # pragma: no cover - from .console import Console, ConsoleOptions, JustifyMethod, OverflowMethod - -DEFAULT_JUSTIFY: "JustifyMethod" = "default" -DEFAULT_OVERFLOW: "OverflowMethod" = "fold" - - -_re_whitespace = re.compile(r"\s+$") - -TextType = Union[str, "Text"] - -GetStyleCallable = Callable[[str], Optional[StyleType]] - - -class Span(NamedTuple): - """A marked up region in some text.""" - - start: int - """Span start index.""" - end: int - """Span end index.""" - style: Union[str, Style] - """Style associated with the span.""" - - def __repr__(self) -> str: - return f"Span({self.start}, {self.end}, {self.style!r})" - - def __bool__(self) -> bool: - return self.end > self.start - - def split(self, offset: int) -> Tuple["Span", Optional["Span"]]: - """Split a span in to 2 from a given offset.""" - - if offset < self.start: - return self, None - if offset 
>= self.end: - return self, None - - start, end, style = self - span1 = Span(start, min(end, offset), style) - span2 = Span(span1.end, end, style) - return span1, span2 - - def move(self, offset: int) -> "Span": - """Move start and end by a given offset. - - Args: - offset (int): Number of characters to add to start and end. - - Returns: - TextSpan: A new TextSpan with adjusted position. - """ - start, end, style = self - return Span(start + offset, end + offset, style) - - def right_crop(self, offset: int) -> "Span": - """Crop the span at the given offset. - - Args: - offset (int): A value between start and end. - - Returns: - Span: A new (possibly smaller) span. - """ - start, end, style = self - if offset >= end: - return self - return Span(start, min(offset, end), style) - - -class Text(JupyterMixin): - """Text with color / style. - - Args: - text (str, optional): Default unstyled text. Defaults to "". - style (Union[str, Style], optional): Base style for text. Defaults to "". - justify (str, optional): Justify method: "left", "center", "full", "right". Defaults to None. - overflow (str, optional): Overflow method: "crop", "fold", "ellipsis". Defaults to None. - no_wrap (bool, optional): Disable text wrapping, or None for default. Defaults to None. - end (str, optional): Character to end text with. Defaults to "\\\\n". - tab_size (int): Number of spaces per tab, or ``None`` to use ``console.tab_size``. Defaults to 8. - spans (List[Span], optional). A list of predefined style spans. Defaults to None. 
- """ - - __slots__ = [ - "_text", - "style", - "justify", - "overflow", - "no_wrap", - "end", - "tab_size", - "_spans", - "_length", - ] - - def __init__( - self, - text: str = "", - style: Union[str, Style] = "", - *, - justify: Optional["JustifyMethod"] = None, - overflow: Optional["OverflowMethod"] = None, - no_wrap: Optional[bool] = None, - end: str = "\n", - tab_size: Optional[int] = 8, - spans: Optional[List[Span]] = None, - ) -> None: - sanitized_text = strip_control_codes(text) - self._text = [sanitized_text] - self.style = style - self.justify: Optional["JustifyMethod"] = justify - self.overflow: Optional["OverflowMethod"] = overflow - self.no_wrap = no_wrap - self.end = end - self.tab_size = tab_size - self._spans: List[Span] = spans or [] - self._length: int = len(sanitized_text) - - def __len__(self) -> int: - return self._length - - def __bool__(self) -> bool: - return bool(self._length) - - def __str__(self) -> str: - return self.plain - - def __repr__(self) -> str: - return f"<text {self.plain!r} {self._spans!r}>" - - def __add__(self, other: Any) -> "Text": - if isinstance(other, (str, Text)): - result = self.copy() - result.append(other) - return result - return NotImplemented - - def __eq__(self, other: object) -> bool: - if not isinstance(other, Text): - return NotImplemented - return self.plain == other.plain and self._spans == other._spans - - def __contains__(self, other: object) -> bool: - if isinstance(other, str): - return other in self.plain - elif isinstance(other, Text): - return other.plain in self.plain - return False - - def __getitem__(self, slice: Union[int, slice]) -> "Text": - def get_text_at(offset: int) -> "Text": - _Span = Span - text = Text( - self.plain[offset], - spans=[ - _Span(0, 1, style) - for start, end, style in self._spans - if end > offset >= start - ], - end="", - ) - return text - - if isinstance(slice, int): - return get_text_at(slice) - else: - start, stop, step = slice.indices(len(self.plain)) - if step == 1: - lines = self.divide([start, stop]) - 
return lines[1] - else: - # This would be a bit of work to implement efficiently - # For now, its not required - raise TypeError("slices with step!=1 are not supported") - - @property - def cell_len(self) -> int: - """Get the number of cells required to render this text.""" - return cell_len(self.plain) - - @property - def markup(self) -> str: - """Get console markup to render this Text. - - Returns: - str: A string potentially creating markup tags. - """ - from .markup import escape - - output: List[str] = [] - - plain = self.plain - markup_spans = [ - (0, False, self.style), - *((span.start, False, span.style) for span in self._spans), - *((span.end, True, span.style) for span in self._spans), - (len(plain), True, self.style), - ] - markup_spans.sort(key=itemgetter(0, 1)) - position = 0 - append = output.append - for offset, closing, style in markup_spans: - if offset > position: - append(escape(plain[position:offset])) - position = offset - if style: - append(f"[/{style}]" if closing else f"[{style}]") - markup = "".join(output) - return markup - - @classmethod - def from_markup( - cls, - text: str, - *, - style: Union[str, Style] = "", - emoji: bool = True, - emoji_variant: Optional[EmojiVariant] = None, - justify: Optional["JustifyMethod"] = None, - overflow: Optional["OverflowMethod"] = None, - end: str = "\n", - ) -> "Text": - """Create Text instance from markup. - - Args: - text (str): A string containing console markup. - emoji (bool, optional): Also render emoji code. Defaults to True. - justify (str, optional): Justify method: "left", "center", "full", "right". Defaults to None. - overflow (str, optional): Overflow method: "crop", "fold", "ellipsis". Defaults to None. - end (str, optional): Character to end text with. Defaults to "\\\\n". - - Returns: - Text: A Text instance with markup rendered. 
- """ - from .markup import render - - rendered_text = render(text, style, emoji=emoji, emoji_variant=emoji_variant) - rendered_text.justify = justify - rendered_text.overflow = overflow - rendered_text.end = end - return rendered_text - - @classmethod - def from_ansi( - cls, - text: str, - *, - style: Union[str, Style] = "", - justify: Optional["JustifyMethod"] = None, - overflow: Optional["OverflowMethod"] = None, - no_wrap: Optional[bool] = None, - end: str = "\n", - tab_size: Optional[int] = 8, - ) -> "Text": - """Create a Text object from a string containing ANSI escape codes. - - Args: - text (str): A string containing escape codes. - style (Union[str, Style], optional): Base style for text. Defaults to "". - justify (str, optional): Justify method: "left", "center", "full", "right". Defaults to None. - overflow (str, optional): Overflow method: "crop", "fold", "ellipsis". Defaults to None. - no_wrap (bool, optional): Disable text wrapping, or None for default. Defaults to None. - end (str, optional): Character to end text with. Defaults to "\\\\n". - tab_size (int): Number of spaces per tab, or ``None`` to use ``console.tab_size``. Defaults to 8. - """ - from .ansi import AnsiDecoder - - joiner = Text( - "\n", - justify=justify, - overflow=overflow, - no_wrap=no_wrap, - end=end, - tab_size=tab_size, - style=style, - ) - decoder = AnsiDecoder() - result = joiner.join(line for line in decoder.decode(text)) - return result - - @classmethod - def styled( - cls, - text: str, - style: StyleType = "", - *, - justify: Optional["JustifyMethod"] = None, - overflow: Optional["OverflowMethod"] = None, - ) -> "Text": - """Construct a Text instance with a pre-applied style. A style applied in this way won't be used - to pad the text when it is justified. - - Args: - text (str): A string containing console markup. - style (Union[str, Style]): Style to apply to the text. Defaults to "". - justify (str, optional): Justify method: "left", "center", "full", "right". 
Defaults to None. - overflow (str, optional): Overflow method: "crop", "fold", "ellipsis". Defaults to None. - - Returns: - Text: A text instance with a style applied to the entire string. - """ - styled_text = cls(text, justify=justify, overflow=overflow) - styled_text.stylize(style) - return styled_text - - @classmethod - def assemble( - cls, - *parts: Union[str, "Text", Tuple[str, StyleType]], - style: Union[str, Style] = "", - justify: Optional["JustifyMethod"] = None, - overflow: Optional["OverflowMethod"] = None, - no_wrap: Optional[bool] = None, - end: str = "\n", - tab_size: int = 8, - meta: Optional[Dict[str, Any]] = None, - ) -> "Text": - """Construct a text instance by combining a sequence of strings with optional styles. - The positional arguments should be either strings, or a tuple of string + style. - - Args: - style (Union[str, Style], optional): Base style for text. Defaults to "". - justify (str, optional): Justify method: "left", "center", "full", "right". Defaults to None. - overflow (str, optional): Overflow method: "crop", "fold", "ellipsis". Defaults to None. - end (str, optional): Character to end text with. Defaults to "\\\\n". - tab_size (int): Number of spaces per tab, or ``None`` to use ``console.tab_size``. Defaults to 8. - meta (Dict[str, Any], optional). Meta data to apply to text, or None for no meta data. Default to None - - Returns: - Text: A new text instance. 
- """ - text = cls( - style=style, - justify=justify, - overflow=overflow, - no_wrap=no_wrap, - end=end, - tab_size=tab_size, - ) - append = text.append - _Text = Text - for part in parts: - if isinstance(part, (_Text, str)): - append(part) - else: - append(*part) - if meta: - text.apply_meta(meta) - return text - - @property - def plain(self) -> str: - """Get the text as a single string.""" - if len(self._text) != 1: - self._text[:] = ["".join(self._text)] - return self._text[0] - - @plain.setter - def plain(self, new_text: str) -> None: - """Set the text to a new value.""" - if new_text != self.plain: - sanitized_text = strip_control_codes(new_text) - self._text[:] = [sanitized_text] - old_length = self._length - self._length = len(sanitized_text) - if old_length > self._length: - self._trim_spans() - - @property - def spans(self) -> List[Span]: - """Get a reference to the internal list of spans.""" - return self._spans - - @spans.setter - def spans(self, spans: List[Span]) -> None: - """Set spans.""" - self._spans = spans[:] - - def blank_copy(self, plain: str = "") -> "Text": - """Return a new Text instance with copied meta data (but not the string or spans).""" - copy_self = Text( - plain, - style=self.style, - justify=self.justify, - overflow=self.overflow, - no_wrap=self.no_wrap, - end=self.end, - tab_size=self.tab_size, - ) - return copy_self - - def copy(self) -> "Text": - """Return a copy of this instance.""" - copy_self = Text( - self.plain, - style=self.style, - justify=self.justify, - overflow=self.overflow, - no_wrap=self.no_wrap, - end=self.end, - tab_size=self.tab_size, - ) - copy_self._spans[:] = self._spans - return copy_self - - def stylize( - self, - style: Union[str, Style], - start: int = 0, - end: Optional[int] = None, - ) -> None: - """Apply a style to the text, or a portion of the text. - - Args: - style (Union[str, Style]): Style instance or style definition to apply. - start (int): Start offset (negative indexing is supported). 
Defaults to 0. - end (Optional[int], optional): End offset (negative indexing is supported), or None for end of text. Defaults to None. - """ - if style: - length = len(self) - if start < 0: - start = length + start - if end is None: - end = length - if end < 0: - end = length + end - if start >= length or end <= start: - # Span not in text or not valid - return - self._spans.append(Span(start, min(length, end), style)) - - def stylize_before( - self, - style: Union[str, Style], - start: int = 0, - end: Optional[int] = None, - ) -> None: - """Apply a style to the text, or a portion of the text. Styles will be applied before other styles already present. - - Args: - style (Union[str, Style]): Style instance or style definition to apply. - start (int): Start offset (negative indexing is supported). Defaults to 0. - end (Optional[int], optional): End offset (negative indexing is supported), or None for end of text. Defaults to None. - """ - if style: - length = len(self) - if start < 0: - start = length + start - if end is None: - end = length - if end < 0: - end = length + end - if start >= length or end <= start: - # Span not in text or not valid - return - self._spans.insert(0, Span(start, min(length, end), style)) - - def apply_meta( - self, meta: Dict[str, Any], start: int = 0, end: Optional[int] = None - ) -> None: - """Apply meta data to the text, or a portion of the text. - - Args: - meta (Dict[str, Any]): A dict of meta information. - start (int): Start offset (negative indexing is supported). Defaults to 0. - end (Optional[int], optional): End offset (negative indexing is supported), or None for end of text. Defaults to None. - - """ - style = Style.from_meta(meta) - self.stylize(style, start=start, end=end) - - def on(self, meta: Optional[Dict[str, Any]] = None, **handlers: Any) -> "Text": - """Apply event handlers (used by Textual project). 
- - Example: - >>> from rich.text import Text - >>> text = Text("hello world") - >>> text.on(click="view.toggle('world')") - - Args: - meta (Dict[str, Any]): Mapping of meta information. - **handlers: Keyword args are prefixed with "@" to define handlers. - - Returns: - Text: Self is returned so that methods may be chained. - """ - meta = {} if meta is None else meta - meta.update({f"@{key}": value for key, value in handlers.items()}) - self.stylize(Style.from_meta(meta)) - return self - - def remove_suffix(self, suffix: str) -> None: - """Remove a suffix if it exists. - - Args: - suffix (str): Suffix to remove. - """ - if self.plain.endswith(suffix): - self.right_crop(len(suffix)) - - def get_style_at_offset(self, console: "Console", offset: int) -> Style: - """Get the style of a character at a given offset. - - Args: - console (~Console): Console where text will be rendered. - offset (int): Offset into text (negative indexing supported) - - Returns: - Style: A Style instance. - """ - # TODO: This is a little inefficient, it is only used by full justify - if offset < 0: - offset = len(self) + offset - get_style = console.get_style - style = get_style(self.style).copy() - for start, end, span_style in self._spans: - if end > offset >= start: - style += get_style(span_style, default="") - return style - - def highlight_regex( - self, - re_highlight: str, - style: Optional[Union[GetStyleCallable, StyleType]] = None, - *, - style_prefix: str = "", - ) -> int: - """Highlight text with a regular expression, where group names are - translated to styles. - - Args: - re_highlight (str): A regular expression. - style (Union[GetStyleCallable, StyleType]): Optional style to apply to whole match, or a callable - which accepts the matched text and returns a style. Defaults to None. - style_prefix (str, optional): Optional prefix to add to style group names. 
- - Returns: - int: Number of regex matches - """ - count = 0 - append_span = self._spans.append - _Span = Span - plain = self.plain - for match in re.finditer(re_highlight, plain): - get_span = match.span - if style: - start, end = get_span() - match_style = style(plain[start:end]) if callable(style) else style - if match_style is not None and end > start: - append_span(_Span(start, end, match_style)) - - count += 1 - for name in match.groupdict().keys(): - start, end = get_span(name) - if start != -1 and end > start: - append_span(_Span(start, end, f"{style_prefix}{name}")) - return count - - def highlight_words( - self, - words: Iterable[str], - style: Union[str, Style], - *, - case_sensitive: bool = True, - ) -> int: - """Highlight words with a style. - - Args: - words (Iterable[str]): Words to highlight. - style (Union[str, Style]): Style to apply. - case_sensitive (bool, optional): Enable case sensitive matching. Defaults to True. - - Returns: - int: Number of words highlighted. - """ - re_words = "|".join(re.escape(word) for word in words) - add_span = self._spans.append - count = 0 - _Span = Span - for match in re.finditer( - re_words, self.plain, flags=0 if case_sensitive else re.IGNORECASE - ): - start, end = match.span(0) - add_span(_Span(start, end, style)) - count += 1 - return count - - def rstrip(self) -> None: - """Strip whitespace from end of text.""" - self.plain = self.plain.rstrip() - - def rstrip_end(self, size: int) -> None: - """Remove whitespace beyond a certain width at the end of the text. - - Args: - size (int): The desired size of the text. 
- """ - text_length = len(self) - if text_length > size: - excess = text_length - size - whitespace_match = _re_whitespace.search(self.plain) - if whitespace_match is not None: - whitespace_count = len(whitespace_match.group(0)) - self.right_crop(min(whitespace_count, excess)) - - def set_length(self, new_length: int) -> None: - """Set new length of the text, clipping or padding as required.""" - length = len(self) - if length != new_length: - if length < new_length: - self.pad_right(new_length - length) - else: - self.right_crop(length - new_length) - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> Iterable[Segment]: - tab_size: int = console.tab_size or self.tab_size or 8 - justify = self.justify or options.justify or DEFAULT_JUSTIFY - - overflow = self.overflow or options.overflow or DEFAULT_OVERFLOW - - lines = self.wrap( - console, - options.max_width, - justify=justify, - overflow=overflow, - tab_size=tab_size or 8, - no_wrap=pick_bool(self.no_wrap, options.no_wrap, False), - ) - all_lines = Text("\n").join(lines) - yield from all_lines.render(console, end=self.end) - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> Measurement: - text = self.plain - lines = text.splitlines() - max_text_width = max(cell_len(line) for line in lines) if lines else 0 - words = text.split() - min_text_width = ( - max(cell_len(word) for word in words) if words else max_text_width - ) - return Measurement(min_text_width, max_text_width) - - def render(self, console: "Console", end: str = "") -> Iterable["Segment"]: - """Render the text as Segments. - - Args: - console (Console): Console instance. - end (Optional[str], optional): Optional end character. - - Returns: - Iterable[Segment]: Result of render that may be written to the console. 
- """ - _Segment = Segment - text = self.plain - if not self._spans: - yield Segment(text) - if end: - yield _Segment(end) - return - get_style = partial(console.get_style, default=Style.null()) - - enumerated_spans = list(enumerate(self._spans, 1)) - style_map = {index: get_style(span.style) for index, span in enumerated_spans} - style_map[0] = get_style(self.style) - - spans = [ - (0, False, 0), - *((span.start, False, index) for index, span in enumerated_spans), - *((span.end, True, index) for index, span in enumerated_spans), - (len(text), True, 0), - ] - spans.sort(key=itemgetter(0, 1)) - - stack: List[int] = [] - stack_append = stack.append - stack_pop = stack.remove - - style_cache: Dict[Tuple[Style, ...], Style] = {} - style_cache_get = style_cache.get - combine = Style.combine - - def get_current_style() -> Style: - """Construct current style from stack.""" - styles = tuple(style_map[_style_id] for _style_id in sorted(stack)) - cached_style = style_cache_get(styles) - if cached_style is not None: - return cached_style - current_style = combine(styles) - style_cache[styles] = current_style - return current_style - - for (offset, leaving, style_id), (next_offset, _, _) in zip(spans, spans[1:]): - if leaving: - stack_pop(style_id) - else: - stack_append(style_id) - if next_offset > offset: - yield _Segment(text[offset:next_offset], get_current_style()) - if end: - yield _Segment(end) - - def join(self, lines: Iterable["Text"]) -> "Text": - """Join text together with this instance as the separator. - - Args: - lines (Iterable[Text]): An iterable of Text instances to join. - - Returns: - Text: A new text instance containing join text. 
- """ - - new_text = self.blank_copy() - - def iter_text() -> Iterable["Text"]: - if self.plain: - for last, line in loop_last(lines): - yield line - if not last: - yield self - else: - yield from lines - - extend_text = new_text._text.extend - append_span = new_text._spans.append - extend_spans = new_text._spans.extend - offset = 0 - _Span = Span - - for text in iter_text(): - extend_text(text._text) - if text.style: - append_span(_Span(offset, offset + len(text), text.style)) - extend_spans( - _Span(offset + start, offset + end, style) - for start, end, style in text._spans - ) - offset += len(text) - new_text._length = offset - return new_text - - def expand_tabs(self, tab_size: Optional[int] = None) -> None: - """Converts tabs to spaces. - - Args: - tab_size (int, optional): Size of tabs. Defaults to 8. - - """ - if "\t" not in self.plain: - return - pos = 0 - if tab_size is None: - tab_size = self.tab_size - assert tab_size is not None - result = self.blank_copy() - append = result.append - - _style = self.style - for line in self.split("\n", include_separator=True): - parts = line.split("\t", include_separator=True) - for part in parts: - if part.plain.endswith("\t"): - part._text = [part.plain[:-1] + " "] - append(part) - pos += len(part) - spaces = tab_size - ((pos - 1) % tab_size) - 1 - if spaces: - append(" " * spaces, _style) - pos += spaces - else: - append(part) - self._text = [result.plain] - self._length = len(self.plain) - self._spans[:] = result._spans - - def truncate( - self, - max_width: int, - *, - overflow: Optional["OverflowMethod"] = None, - pad: bool = False, - ) -> None: - """Truncate text if it is longer that a given width. - - Args: - max_width (int): Maximum number of characters in text. - overflow (str, optional): Overflow method: "crop", "fold", or "ellipsis". Defaults to None, to use self.overflow. - pad (bool, optional): Pad with spaces if the length is less than max_width. Defaults to False. 
- """ - _overflow = overflow or self.overflow or DEFAULT_OVERFLOW - if _overflow != "ignore": - length = cell_len(self.plain) - if length > max_width: - if _overflow == "ellipsis": - self.plain = set_cell_size(self.plain, max_width - 1) + "…" - else: - self.plain = set_cell_size(self.plain, max_width) - if pad and length < max_width: - spaces = max_width - length - self._text = [f"{self.plain}{' ' * spaces}"] - self._length = len(self.plain) - - def _trim_spans(self) -> None: - """Remove or modify any spans that are over the end of the text.""" - max_offset = len(self.plain) - _Span = Span - self._spans[:] = [ - ( - span - if span.end < max_offset - else _Span(span.start, min(max_offset, span.end), span.style) - ) - for span in self._spans - if span.start < max_offset - ] - - def pad(self, count: int, character: str = " ") -> None: - """Pad left and right with a given number of characters. - - Args: - count (int): Width of padding. - """ - assert len(character) == 1, "Character must be a string of length 1" - if count: - pad_characters = character * count - self.plain = f"{pad_characters}{self.plain}{pad_characters}" - _Span = Span - self._spans[:] = [ - _Span(start + count, end + count, style) - for start, end, style in self._spans - ] - - def pad_left(self, count: int, character: str = " ") -> None: - """Pad the left with a given character. - - Args: - count (int): Number of characters to pad. - character (str, optional): Character to pad with. Defaults to " ". - """ - assert len(character) == 1, "Character must be a string of length 1" - if count: - self.plain = f"{character * count}{self.plain}" - _Span = Span - self._spans[:] = [ - _Span(start + count, end + count, style) - for start, end, style in self._spans - ] - - def pad_right(self, count: int, character: str = " ") -> None: - """Pad the right with a given character. - - Args: - count (int): Number of characters to pad. - character (str, optional): Character to pad with. Defaults to " ". 
- """ - assert len(character) == 1, "Character must be a string of length 1" - if count: - self.plain = f"{self.plain}{character * count}" - - def align(self, align: AlignMethod, width: int, character: str = " ") -> None: - """Align text to a given width. - - Args: - align (AlignMethod): One of "left", "center", or "right". - width (int): Desired width. - character (str, optional): Character to pad with. Defaults to " ". - """ - self.truncate(width) - excess_space = width - cell_len(self.plain) - if excess_space: - if align == "left": - self.pad_right(excess_space, character) - elif align == "center": - left = excess_space // 2 - self.pad_left(left, character) - self.pad_right(excess_space - left, character) - else: - self.pad_left(excess_space, character) - - def append( - self, text: Union["Text", str], style: Optional[Union[str, "Style"]] = None - ) -> "Text": - """Add text with an optional style. - - Args: - text (Union[Text, str]): A str or Text to append. - style (str, optional): A style name. Defaults to None. - - Returns: - Text: Returns self for chaining. 
- """ - - if not isinstance(text, (str, Text)): - raise TypeError("Only str or Text can be appended to Text") - - if len(text): - if isinstance(text, str): - sanitized_text = strip_control_codes(text) - self._text.append(sanitized_text) - offset = len(self) - text_length = len(sanitized_text) - if style is not None: - self._spans.append(Span(offset, offset + text_length, style)) - self._length += text_length - elif isinstance(text, Text): - _Span = Span - if style is not None: - raise ValueError( - "style must not be set when appending Text instance" - ) - text_length = self._length - if text.style is not None: - self._spans.append( - _Span(text_length, text_length + len(text), text.style) - ) - self._text.append(text.plain) - self._spans.extend( - _Span(start + text_length, end + text_length, style) - for start, end, style in text._spans - ) - self._length += len(text) - return self - - def append_text(self, text: "Text") -> "Text": - """Append another Text instance. This method is more performant than Text.append, but - only works for Text. - - Returns: - Text: Returns self for chaining. - """ - _Span = Span - text_length = self._length - if text.style is not None: - self._spans.append(_Span(text_length, text_length + len(text), text.style)) - self._text.append(text.plain) - self._spans.extend( - _Span(start + text_length, end + text_length, style) - for start, end, style in text._spans - ) - self._length += len(text) - return self - - def append_tokens( - self, tokens: Iterable[Tuple[str, Optional[StyleType]]] - ) -> "Text": - """Append iterable of str and style. Style may be a Style instance or a str style definition. - - Args: - tokens (Iterable[Tuple[str, Optional[StyleType]]]): An iterable of tuples containing str content and style. - - Returns: - Text: Returns self for chaining. 
- """ - append_text = self._text.append - append_span = self._spans.append - _Span = Span - offset = len(self) - for content, style in tokens: - append_text(content) - if style is not None: - append_span(_Span(offset, offset + len(content), style)) - offset += len(content) - self._length = offset - return self - - def copy_styles(self, text: "Text") -> None: - """Copy styles from another Text instance. - - Args: - text (Text): A Text instance to copy styles from, must be the same length. - """ - self._spans.extend(text._spans) - - def split( - self, - separator: str = "\n", - *, - include_separator: bool = False, - allow_blank: bool = False, - ) -> Lines: - """Split rich text in to lines, preserving styles. - - Args: - separator (str, optional): String to split on. Defaults to "\\\\n". - include_separator (bool, optional): Include the separator in the lines. Defaults to False. - allow_blank (bool, optional): Return a blank line if the text ends with a separator. Defaults to False. - - Returns: - List[RichText]: A list of rich text, one per line of the original. - """ - assert separator, "separator must not be empty" - - text = self.plain - if separator not in text: - return Lines([self.copy()]) - - if include_separator: - lines = self.divide( - match.end() for match in re.finditer(re.escape(separator), text) - ) - else: - - def flatten_spans() -> Iterable[int]: - for match in re.finditer(re.escape(separator), text): - start, end = match.span() - yield start - yield end - - lines = Lines( - line for line in self.divide(flatten_spans()) if line.plain != separator - ) - - if not allow_blank and text.endswith(separator): - lines.pop() - - return lines - - def divide(self, offsets: Iterable[int]) -> Lines: - """Divide text in to a number of lines at given offsets. - - Args: - offsets (Iterable[int]): Offsets used to divide text. - - Returns: - Lines: New RichText instances between offsets. 
- """ - _offsets = list(offsets) - - if not _offsets: - return Lines([self.copy()]) - - text = self.plain - text_length = len(text) - divide_offsets = [0, *_offsets, text_length] - line_ranges = list(zip(divide_offsets, divide_offsets[1:])) - - style = self.style - justify = self.justify - overflow = self.overflow - _Text = Text - new_lines = Lines( - _Text( - text[start:end], - style=style, - justify=justify, - overflow=overflow, - ) - for start, end in line_ranges - ) - if not self._spans: - return new_lines - - _line_appends = [line._spans.append for line in new_lines._lines] - line_count = len(line_ranges) - _Span = Span - - for span_start, span_end, style in self._spans: - - lower_bound = 0 - upper_bound = line_count - start_line_no = (lower_bound + upper_bound) // 2 - - while True: - line_start, line_end = line_ranges[start_line_no] - if span_start < line_start: - upper_bound = start_line_no - 1 - elif span_start > line_end: - lower_bound = start_line_no + 1 - else: - break - start_line_no = (lower_bound + upper_bound) // 2 - - if span_end < line_end: - end_line_no = start_line_no - else: - end_line_no = lower_bound = start_line_no - upper_bound = line_count - - while True: - line_start, line_end = line_ranges[end_line_no] - if span_end < line_start: - upper_bound = end_line_no - 1 - elif span_end > line_end: - lower_bound = end_line_no + 1 - else: - break - end_line_no = (lower_bound + upper_bound) // 2 - - for line_no in range(start_line_no, end_line_no + 1): - line_start, line_end = line_ranges[line_no] - new_start = max(0, span_start - line_start) - new_end = min(span_end - line_start, line_end - line_start) - if new_end > new_start: - _line_appends[line_no](_Span(new_start, new_end, style)) - - return new_lines - - def right_crop(self, amount: int = 1) -> None: - """Remove a number of characters from the end of the text.""" - max_offset = len(self.plain) - amount - _Span = Span - self._spans[:] = [ - ( - span - if span.end < max_offset - else 
_Span(span.start, min(max_offset, span.end), span.style) - ) - for span in self._spans - if span.start < max_offset - ] - self._text = [self.plain[:-amount]] - self._length -= amount - - def wrap( - self, - console: "Console", - width: int, - *, - justify: Optional["JustifyMethod"] = None, - overflow: Optional["OverflowMethod"] = None, - tab_size: int = 8, - no_wrap: Optional[bool] = None, - ) -> Lines: - """Word wrap the text. - - Args: - console (Console): Console instance. - width (int): Number of characters per line. - emoji (bool, optional): Also render emoji code. Defaults to True. - justify (str, optional): Justify method: "default", "left", "center", "full", "right". Defaults to "default". - overflow (str, optional): Overflow method: "crop", "fold", or "ellipsis". Defaults to None. - tab_size (int, optional): Default tab size. Defaults to 8. - no_wrap (bool, optional): Disable wrapping, Defaults to False. - - Returns: - Lines: Number of lines. - """ - wrap_justify = justify or self.justify or DEFAULT_JUSTIFY - wrap_overflow = overflow or self.overflow or DEFAULT_OVERFLOW - - no_wrap = pick_bool(no_wrap, self.no_wrap, False) or overflow == "ignore" - - lines = Lines() - for line in self.split(allow_blank=True): - if "\t" in line: - line.expand_tabs(tab_size) - if no_wrap: - new_lines = Lines([line]) - else: - offsets = divide_line(str(line), width, fold=wrap_overflow == "fold") - new_lines = line.divide(offsets) - for line in new_lines: - line.rstrip_end(width) - if wrap_justify: - new_lines.justify( - console, width, justify=wrap_justify, overflow=wrap_overflow - ) - for line in new_lines: - line.truncate(width, overflow=wrap_overflow) - lines.extend(new_lines) - return lines - - def fit(self, width: int) -> Lines: - """Fit the text in to given width by chopping in to lines. - - Args: - width (int): Maximum characters in a line. - - Returns: - Lines: Lines container. 
- """ - lines: Lines = Lines() - append = lines.append - for line in self.split(): - line.set_length(width) - append(line) - return lines - - def detect_indentation(self) -> int: - """Auto-detect indentation of code. - - Returns: - int: Number of spaces used to indent code. - """ - - _indentations = { - len(match.group(1)) - for match in re.finditer(r"^( *)(.*)$", self.plain, flags=re.MULTILINE) - } - - try: - indentation = ( - reduce(gcd, [indent for indent in _indentations if not indent % 2]) or 1 - ) - except TypeError: - indentation = 1 - - return indentation - - def with_indent_guides( - self, - indent_size: Optional[int] = None, - *, - character: str = "│", - style: StyleType = "dim green", - ) -> "Text": - """Adds indent guide lines to text. - - Args: - indent_size (Optional[int]): Size of indentation, or None to auto detect. Defaults to None. - character (str, optional): Character to use for indentation. Defaults to "│". - style (Union[Style, str], optional): Style of indent guides. - - Returns: - Text: New text with indentation guides. 
- """ - - _indent_size = self.detect_indentation() if indent_size is None else indent_size - - text = self.copy() - text.expand_tabs() - indent_line = f"{character}{' ' * (_indent_size - 1)}" - - re_indent = re.compile(r"^( *)(.*)$") - new_lines: List[Text] = [] - add_line = new_lines.append - blank_lines = 0 - for line in text.split(allow_blank=True): - match = re_indent.match(line.plain) - if not match or not match.group(2): - blank_lines += 1 - continue - indent = match.group(1) - full_indents, remaining_space = divmod(len(indent), _indent_size) - new_indent = f"{indent_line * full_indents}{' ' * remaining_space}" - line.plain = new_indent + line.plain[len(new_indent) :] - line.stylize(style, 0, len(new_indent)) - if blank_lines: - new_lines.extend([Text(new_indent, style=style)] * blank_lines) - blank_lines = 0 - add_line(line) - if blank_lines: - new_lines.extend([Text("", style=style)] * blank_lines) - - new_text = text.blank_copy("\n").join(new_lines) - return new_text - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich.console import Console - - text = Text( - """\nLorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. 
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.\n""" - ) - text.highlight_words(["Lorem"], "bold") - text.highlight_words(["ipsum"], "italic") - - console = Console() - - console.rule("justify='left'") - console.print(text, style="red") - console.print() - - console.rule("justify='center'") - console.print(text, style="green", justify="center") - console.print() - - console.rule("justify='right'") - console.print(text, style="blue", justify="right") - console.print() - - console.rule("justify='full'") - console.print(text, style="magenta", justify="full") - console.print() diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_resources/_itertools.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_resources/_itertools.py deleted file mode 100644 index cce05582ffc6fe6d72027194f4ccc44ee42f1fcd..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_resources/_itertools.py +++ /dev/null @@ -1,35 +0,0 @@ -from itertools import filterfalse - -from typing import ( - Callable, - Iterable, - Iterator, - Optional, - Set, - TypeVar, - Union, -) - -# Type and type variable definitions -_T = TypeVar('_T') -_U = TypeVar('_U') - - -def unique_everseen( - iterable: Iterable[_T], key: Optional[Callable[[_T], _U]] = None -) -> Iterator[_T]: - "List unique elements, preserving order. Remember all elements ever seen." 
- # unique_everseen('AAAABBBCCDAABBB') --> A B C D - # unique_everseen('ABBCcAD', str.lower) --> A B C D - seen: Set[Union[_T, _U]] = set() - seen_add = seen.add - if key is None: - for element in filterfalse(seen.__contains__, iterable): - seen_add(element) - yield element - else: - for element in iterable: - k = key(element) - if k not in seen: - seen_add(k) - yield element diff --git a/spaces/plzdontcry/dakubettergpt/src/hooks/useHideOnOutsideClick.ts b/spaces/plzdontcry/dakubettergpt/src/hooks/useHideOnOutsideClick.ts deleted file mode 100644 index 261aeb287bfc2d39d8a435dee5a68e9920fe172a..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/hooks/useHideOnOutsideClick.ts +++ /dev/null @@ -1,36 +0,0 @@ -import React, { useEffect, useRef, useState } from 'react'; - -const useHideOnOutsideClick = (): [ - boolean, - React.Dispatch<React.SetStateAction<boolean>>, - React.RefObject<HTMLElement> -] => { - const elementRef = useRef<HTMLElement>(null); - const [showElement, setShowElement] = useState<boolean>(false); - - const handleClickOutside = (event: MouseEvent) => { - if ( - elementRef.current && - !elementRef.current.contains(event.target as Node) - ) { - setShowElement(false); - } - }; - - useEffect(() => { - // Bind the event listener only if the element is shown. 
- if (showElement) { - document.addEventListener('mousedown', handleClickOutside); - } else { - document.removeEventListener('mousedown', handleClickOutside); - } - - return () => { - document.removeEventListener('mousedown', handleClickOutside); - }; - }, [showElement, elementRef]); - - return [showElement, setShowElement, elementRef]; -}; - -export default useHideOnOutsideClick; diff --git a/spaces/pragnakalp/bert_based_ner/README.md b/spaces/pragnakalp/bert_based_ner/README.md deleted file mode 100644 index 0c78eb27d0c4825823a815d581a7a251306b6cbc..0000000000000000000000000000000000000000 --- a/spaces/pragnakalp/bert_based_ner/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bert Based Ner -emoji: 👀 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pratikshapatil0220/GenarativeAIChatBot/README.md b/spaces/pratikshapatil0220/GenarativeAIChatBot/README.md deleted file mode 100644 index 161d5fde3cac6cf472d30aeded2c39f42e4922a2..0000000000000000000000000000000000000000 --- a/spaces/pratikshapatil0220/GenarativeAIChatBot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GenarativeAIChatBot -emoji: 📊 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/prerna9811/Chord/portaudio/doc/utils/checkfiledocs.py b/spaces/prerna9811/Chord/portaudio/doc/utils/checkfiledocs.py deleted file mode 100644 index 5d6b58518f7c97eed0e37c9a08fbcf14b3377f89..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/doc/utils/checkfiledocs.py +++ /dev/null @@ -1,87 +0,0 @@ -import os -import os.path -import string - -paRootDirectory = '../../' -paHtmlDocDirectory = os.path.join( paRootDirectory, "doc", "html" ) - -## 
Script to check documentation status -## this script assumes that html doxygen documentation has been generated -## -## it then walks the entire portaudio source tree and check that -## - every source file (.c,.h,.cpp) has a doxygen comment block containing -## - a @file directive -## - a @brief directive -## - a @ingroup directive -## - it also checks that a corresponding html documentation file has been generated. -## -## This can be used as a first-level check to make sure the documentation is in order. -## -## The idea is to get a list of which files are missing doxygen documentation. -## -## How to run: -## $ cd doc/utils -## $ python checkfiledocs.py - -def oneOf_a_in_b(a, b): - for x in a: - if x in b: - return True - return False - -# recurse from top and return a list of all with the given -# extensions. ignore .svn directories. return absolute paths -def recursiveFindFiles( top, extensions, dirBlacklist, includePaths ): - result = [] - for (dirpath, dirnames, filenames) in os.walk(top): - if not oneOf_a_in_b(dirBlacklist, dirpath): - for f in filenames: - if os.path.splitext(f)[1] in extensions: - if includePaths: - result.append( os.path.abspath( os.path.join( dirpath, f ) ) ) - else: - result.append( f ) - return result - -# generate the html file name that doxygen would use for -# a particular source file. 
this is a brittle conversion -# which i worked out by trial and error -def doxygenHtmlDocFileName( sourceFile ): - return sourceFile.replace( '_', '__' ).replace( '.', '_8' ) + '.html' - - -sourceFiles = recursiveFindFiles( os.path.join(paRootDirectory,'src'), [ '.c', '.h', '.cpp' ], ['.svn', 'mingw-include'], True ); -sourceFiles += recursiveFindFiles( os.path.join(paRootDirectory,'include'), [ '.c', '.h', '.cpp' ], ['.svn'], True ); -docFiles = recursiveFindFiles( paHtmlDocDirectory, [ '.html' ], ['.svn'], False ); - - - -currentFile = "" - -def printError( f, message ): - global currentFile - if f != currentFile: - currentFile = f - print f, ":" - print "\t!", message - - -for f in sourceFiles: - if not doxygenHtmlDocFileName( os.path.basename(f) ) in docFiles: - printError( f, "no doxygen generated doc page" ) - - s = file( f, 'rt' ).read() - - if not '/**' in s: - printError( f, "no doxygen /** block" ) - - if not '@file' in s: - printError( f, "no doxygen @file tag" ) - - if not '@brief' in s: - printError( f, "no doxygen @brief tag" ) - - if not '@ingroup' in s: - printError( f, "no doxygen @ingroup tag" ) - - diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/ImageGrab.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/ImageGrab.py deleted file mode 100644 index bcfffc3dc137a724d92c9cde039e6f855a23d436..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/ImageGrab.py +++ /dev/null @@ -1,177 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# screen grabber -# -# History: -# 2001-04-26 fl created -# 2001-09-17 fl use builtin driver, if present -# 2002-11-19 fl added grabclipboard support -# -# Copyright (c) 2001-2002 by Secret Labs AB -# Copyright (c) 2001-2002 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import io -import os -import shutil -import subprocess -import sys -import tempfile - -from . 
import Image - - -def grab(bbox=None, include_layered_windows=False, all_screens=False, xdisplay=None): - if xdisplay is None: - if sys.platform == "darwin": - fh, filepath = tempfile.mkstemp(".png") - os.close(fh) - args = ["screencapture"] - if bbox: - left, top, right, bottom = bbox - args += ["-R", f"{left},{top},{right-left},{bottom-top}"] - subprocess.call(args + ["-x", filepath]) - im = Image.open(filepath) - im.load() - os.unlink(filepath) - if bbox: - im_resized = im.resize((right - left, bottom - top)) - im.close() - return im_resized - return im - elif sys.platform == "win32": - offset, size, data = Image.core.grabscreen_win32( - include_layered_windows, all_screens - ) - im = Image.frombytes( - "RGB", - size, - data, - # RGB, 32-bit line padding, origin lower left corner - "raw", - "BGR", - (size[0] * 3 + 3) & -4, - -1, - ) - if bbox: - x0, y0 = offset - left, top, right, bottom = bbox - im = im.crop((left - x0, top - y0, right - x0, bottom - y0)) - return im - try: - if not Image.core.HAVE_XCB: - msg = "Pillow was built without XCB support" - raise OSError(msg) - size, data = Image.core.grabscreen_x11(xdisplay) - except OSError: - if ( - xdisplay is None - and sys.platform not in ("darwin", "win32") - and shutil.which("gnome-screenshot") - ): - fh, filepath = tempfile.mkstemp(".png") - os.close(fh) - subprocess.call(["gnome-screenshot", "-f", filepath]) - im = Image.open(filepath) - im.load() - os.unlink(filepath) - if bbox: - im_cropped = im.crop(bbox) - im.close() - return im_cropped - return im - else: - raise - else: - im = Image.frombytes("RGB", size, data, "raw", "BGRX", size[0] * 4, 1) - if bbox: - im = im.crop(bbox) - return im - - -def grabclipboard(): - if sys.platform == "darwin": - fh, filepath = tempfile.mkstemp(".png") - os.close(fh) - commands = [ - 'set theFile to (open for access POSIX file "' - + filepath - + '" with write permission)', - "try", - " write (the clipboard as «class PNGf») to theFile", - "end try", - "close access 
theFile", - ] - script = ["osascript"] - for command in commands: - script += ["-e", command] - subprocess.call(script) - - im = None - if os.stat(filepath).st_size != 0: - im = Image.open(filepath) - im.load() - os.unlink(filepath) - return im - elif sys.platform == "win32": - fmt, data = Image.core.grabclipboard_win32() - if fmt == "file": # CF_HDROP - import struct - - o = struct.unpack_from("I", data)[0] - if data[16] != 0: - files = data[o:].decode("utf-16le").split("\0") - else: - files = data[o:].decode("mbcs").split("\0") - return files[: files.index("")] - if isinstance(data, bytes): - data = io.BytesIO(data) - if fmt == "png": - from . import PngImagePlugin - - return PngImagePlugin.PngImageFile(data) - elif fmt == "DIB": - from . import BmpImagePlugin - - return BmpImagePlugin.DibImageFile(data) - return None - else: - if os.getenv("WAYLAND_DISPLAY"): - session_type = "wayland" - elif os.getenv("DISPLAY"): - session_type = "x11" - else: # Session type check failed - session_type = None - - if shutil.which("wl-paste") and session_type in ("wayland", None): - output = subprocess.check_output(["wl-paste", "-l"]).decode() - mimetypes = output.splitlines() - if "image/png" in mimetypes: - mimetype = "image/png" - elif mimetypes: - mimetype = mimetypes[0] - else: - mimetype = None - - args = ["wl-paste"] - if mimetype: - args.extend(["-t", mimetype]) - elif shutil.which("xclip") and session_type in ("x11", None): - args = ["xclip", "-selection", "clipboard", "-t", "image/png", "-o"] - else: - msg = "wl-paste or xclip is required for ImageGrab.grabclipboard() on Linux" - raise NotImplementedError(msg) - - p = subprocess.run(args, capture_output=True) - err = p.stderr - if err: - msg = f"{args[0]} error: {err.strip().decode()}" - raise ChildProcessError(msg) - data = io.BytesIO(p.stdout) - im = Image.open(data) - im.load() - return im diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-cb04d13d.js 
b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-cb04d13d.js deleted file mode 100644 index 06328e108004b8b67da2f33f126c485fd88127b1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-cb04d13d.js +++ /dev/null @@ -1,8 +0,0 @@ -import{B as cl}from"./Button-89057c03.js";import{u as qe,c as ml}from"./utils-c3e3db58.js";import{d as dl}from"./index-2f00b72c.js";import{S as bl}from"./ShareButton-d3fa81fa.js";import{S as hl}from"./Index-37584f50.js";import"./Index.svelte_svelte_type_style_lang-8ef5c92a.js";import{M as gl}from"./Example.svelte_svelte_type_style_lang-49787a8b.js";import{g as ze,n as wl}from"./index-0526d562.js";import{C as kl,a as vl}from"./Copy-1b5c0932.js";import{B as pl}from"./BlockLabel-e3b0d1c3.js";import"./IconButton-16e5dbea.js";import"./svelte/svelte.js";const{SvelteComponent:Cl,append:Be,attr:M,detach:yl,init:jl,insert:Sl,noop:ve,safe_not_equal:ql,svg_element:pe}=window.__gradio__svelte__internal;function zl(t){let e,i,l;return{c(){e=pe("svg"),i=pe("path"),l=pe("path"),M(i,"fill","currentColor"),M(i,"d","M17.74 30L16 29l4-7h6a2 2 0 0 0 2-2V8a2 2 0 0 0-2-2H6a2 2 0 0 0-2 2v12a2 2 0 0 0 2 2h9v2H6a4 4 0 0 1-4-4V8a4 4 0 0 1 4-4h20a4 4 0 0 1 4 4v12a4 4 0 0 1-4 4h-4.84Z"),M(l,"fill","currentColor"),M(l,"d","M8 10h16v2H8zm0 6h10v2H8z"),M(e,"xmlns","http://www.w3.org/2000/svg"),M(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),M(e,"aria-hidden","true"),M(e,"role","img"),M(e,"class","iconify iconify--carbon"),M(e,"width","100%"),M(e,"height","100%"),M(e,"preserveAspectRatio","xMidYMid meet"),M(e,"viewBox","0 0 32 32")},m(n,r){Sl(n,e,r),Be(e,i),Be(e,l)},p:ve,i:ve,o:ve,d(n){n&&yl(e)}}}class Bl extends Cl{constructor(e){super(),jl(this,e,null,zl,ql,{})}}const{SvelteComponent:Hl,append:He,attr:y,detach:Ml,init:Ll,insert:Tl,noop:Me,safe_not_equal:El,svg_element:Ce}=window.__gradio__svelte__internal;function Al(t){let 
e,i,l,n;return{c(){e=Ce("svg"),i=Ce("path"),l=Ce("path"),y(i,"stroke","currentColor"),y(i,"stroke-width","1.5"),y(i,"stroke-linecap","round"),y(i,"d","M16.472 3.5H4.1a.6.6 0 0 0-.6.6v9.8a.6.6 0 0 0 .6.6h2.768a2 2 0 0 1 1.715.971l2.71 4.517a1.631 1.631 0 0 0 2.961-1.308l-1.022-3.408a.6.6 0 0 1 .574-.772h4.575a2 2 0 0 0 1.93-2.526l-1.91-7A2 2 0 0 0 16.473 3.5Z"),y(l,"stroke","currentColor"),y(l,"stroke-width","1.5"),y(l,"stroke-linecap","round"),y(l,"stroke-linejoin","round"),y(l,"d","M7 14.5v-11"),y(e,"xmlns","http://www.w3.org/2000/svg"),y(e,"width","15px"),y(e,"height","15px"),y(e,"viewBox","0 0 24 24"),y(e,"fill",n=t[0]?"currentColor":"none"),y(e,"stroke-width","1.5"),y(e,"color","currentColor")},m(r,o){Tl(r,e,o),He(e,i),He(e,l)},p(r,[o]){o&1&&n!==(n=r[0]?"currentColor":"none")&&y(e,"fill",n)},i:Me,o:Me,d(r){r&&Ml(e)}}}function Dl(t,e,i){let{actioned:l}=e;return t.$$set=n=>{"actioned"in n&&i(0,l=n.actioned)},[l]}class Nl extends Hl{constructor(e){super(),Ll(this,e,Dl,Al,El,{actioned:0})}}const{SvelteComponent:Pl,append:Le,attr:j,detach:Il,init:Ul,insert:Vl,noop:Te,safe_not_equal:Zl,svg_element:ye}=window.__gradio__svelte__internal;function Fl(t){let e,i,l,n;return{c(){e=ye("svg"),i=ye("path"),l=ye("path"),j(i,"stroke","currentColor"),j(i,"stroke-width","1.5"),j(i,"stroke-linecap","round"),j(i,"d","M16.472 20H4.1a.6.6 0 0 1-.6-.6V9.6a.6.6 0 0 1 .6-.6h2.768a2 2 0 0 0 1.715-.971l2.71-4.517a1.631 1.631 0 0 1 2.961 1.308l-1.022 3.408a.6.6 0 0 0 .574.772h4.575a2 2 0 0 1 1.93 2.526l-1.91 7A2 2 0 0 1 16.473 20Z"),j(l,"stroke","currentColor"),j(l,"stroke-width","1.5"),j(l,"stroke-linecap","round"),j(l,"stroke-linejoin","round"),j(l,"d","M7 20V9"),j(e,"xmlns","http://www.w3.org/2000/svg"),j(e,"width","15px"),j(e,"height","15px"),j(e,"viewBox","0 0 24 
24"),j(e,"fill",n=t[0]?"currentColor":"none"),j(e,"stroke-width","1.5"),j(e,"color","currentColor")},m(r,o){Vl(r,e,o),Le(e,i),Le(e,l)},p(r,[o]){o&1&&n!==(n=r[0]?"currentColor":"none")&&j(e,"fill",n)},i:Te,o:Te,d(r){r&&Il(e)}}}function Ol(t,e,i){let{actioned:l}=e;return t.$$set=n=>{"actioned"in n&&i(0,l=n.actioned)},[l]}class Rl extends Pl{constructor(e){super(),Ul(this,e,Ol,Fl,Zl,{actioned:0})}}const Yl=async t=>(await Promise.all(t.map(async i=>await Promise.all(i.map(async(l,n)=>{if(l===null)return"";let r=n===0?"😃":"🤖",o="";if(typeof l=="string"){const s={audio:/|!\[.*?\]\((\/file=.*?)\)/g};o=l;for(let[a,u]of Object.entries(s)){let f;for(;(f=u.exec(l))!==null;){const w=f[1]||f[2],L=await qe(w,"url");o=o.replace(w,L)}}}else{if(!l?.url)return"";const s=await qe(l.url,"url");l.mime_type?.includes("audio")?o=``:l.mime_type?.includes("video")?o=s:l.mime_type?.includes("image")&&(o=``)}return`${r}: ${o}`}))))).map(i=>i.join(i[0]!==""&&i[1]!==""?` -`:"")).join(` -`);const{SvelteComponent:Gl,append:Jl,attr:se,check_outros:Ee,create_component:ol,destroy_component:rl,detach:Kl,element:Ql,group_outros:Ae,init:Wl,insert:Xl,listen:$l,mount_component:sl,safe_not_equal:xl,space:et,transition_in:J,transition_out:$}=window.__gradio__svelte__internal,{onDestroy:lt}=window.__gradio__svelte__internal;function De(t){let e,i;return e=new kl({}),{c(){ol(e.$$.fragment)},m(l,n){sl(e,l,n),i=!0},i(l){i||(J(e.$$.fragment,l),i=!0)},o(l){$(e.$$.fragment,l),i=!1},d(l){rl(e,l)}}}function Ne(t){let e,i;return e=new vl({}),{c(){ol(e.$$.fragment)},m(l,n){sl(e,l,n),i=!0},i(l){i||(J(e.$$.fragment,l),i=!0)},o(l){$(e.$$.fragment,l),i=!1},d(l){rl(e,l)}}}function tt(t){let e,i,l,n,r,o,s=!t[0]&&De(),a=t[0]&&Ne();return{c(){e=Ql("button"),s&&s.c(),i=et(),a&&a.c(),se(e,"title","copy"),se(e,"aria-label",l=t[0]?"Copied message":"Copy 
message"),se(e,"class","svelte-11hlfrc")},m(u,f){Xl(u,e,f),s&&s.m(e,null),Jl(e,i),a&&a.m(e,null),n=!0,r||(o=$l(e,"click",t[1]),r=!0)},p(u,[f]){u[0]?s&&(Ae(),$(s,1,1,()=>{s=null}),Ee()):s?f&1&&J(s,1):(s=De(),s.c(),J(s,1),s.m(e,i)),u[0]?a?f&1&&J(a,1):(a=Ne(),a.c(),J(a,1),a.m(e,null)):a&&(Ae(),$(a,1,1,()=>{a=null}),Ee()),(!n||f&1&&l!==(l=u[0]?"Copied message":"Copy message"))&&se(e,"aria-label",l)},i(u){n||(J(s),J(a),n=!0)},o(u){$(s),$(a),n=!1},d(u){u&&Kl(e),s&&s.d(),a&&a.d(),r=!1,o()}}}function nt(t,e,i){let l=!1,{value:n}=e,r;function o(){i(0,l=!0),r&&clearTimeout(r),r=setTimeout(()=>{i(0,l=!1)},2e3)}async function s(){if("clipboard"in navigator)await navigator.clipboard.writeText(n),o();else{const a=document.createElement("textarea");a.value=n,a.style.position="absolute",a.style.left="-999999px",document.body.prepend(a),a.select();try{document.execCommand("copy"),o()}catch(u){console.error(u)}finally{a.remove()}}}return lt(()=>{r&&clearTimeout(r)}),t.$$set=a=>{"value"in a&&i(2,n=a.value)},[l,s,n]}class it extends Gl{constructor(e){super(),Wl(this,e,nt,tt,xl,{value:2})}}const{SvelteComponent:at,attr:ie,create_component:ot,destroy_component:rt,detach:st,element:ut,init:ft,insert:_t,listen:Pe,mount_component:ct,run_all:mt,safe_not_equal:dt,transition_in:bt,transition_out:ht}=window.__gradio__svelte__internal;function gt(t){let e,i,l,n,r,o,s;return i=new t[3]({props:{actioned:t[2]}}),{c(){e=ut("button"),ot(i.$$.fragment),ie(e,"title",l=t[0]+" message"),ie(e,"aria-label",n=t[2]?`clicked ${t[0]}`:t[0]),ie(e,"class","svelte-3snf3m")},m(a,u){_t(a,e,u),ct(i,e,null),r=!0,o||(s=[Pe(e,"click",t[5]),Pe(e,"keydown",t[6])],o=!0)},p(a,[u]){const f={};u&4&&(f.actioned=a[2]),i.$set(f),(!r||u&1&&l!==(l=a[0]+" message"))&&ie(e,"title",l),(!r||u&5&&n!==(n=a[2]?`clicked ${a[0]}`:a[0]))&&ie(e,"aria-label",n)},i(a){r||(bt(i.$$.fragment,a),r=!0)},o(a){ht(i.$$.fragment,a),r=!1},d(a){a&&st(e),rt(i),o=!1,mt(s)}}}function 
wt(t,e,i){let{action:l}=e,{handle_action:n}=e,r=!1,o=l==="like"?Rl:Nl;function s(){i(2,r=!0)}const a=()=>{s(),n()},u=f=>{f.key==="Enter"&&(s(),n())};return t.$$set=f=>{"action"in f&&i(0,l=f.action),"handle_action"in f&&i(1,n=f.handle_action)},[l,n,r,o,s,a,u]}class Ie extends at{constructor(e){super(),ft(this,e,wt,gt,dt,{action:0,handle_action:1})}}const{SvelteComponent:kt,attr:ue,detach:vt,element:pt,init:Ct,insert:yt,noop:Ue,safe_not_equal:jt,set_style:Ve}=window.__gradio__svelte__internal;function St(t){let e;return{c(){e=pt("div"),e.innerHTML=`Loading content
    -   -
    -   -
    `,ue(e,"class","message pending svelte-1ofy3w8"),ue(e,"role","status"),ue(e,"aria-label","Loading response"),ue(e,"aria-live","polite"),Ve(e,"border-radius",t[0]==="bubble"?"var(--radius-xxl)":"none")},m(i,l){yt(i,e,l)},p(i,[l]){l&1&&Ve(e,"border-radius",i[0]==="bubble"?"var(--radius-xxl)":"none")},i:Ue,o:Ue,d(i){i&&vt(e)}}}function qt(t,e,i){let{layout:l="bubble"}=e;return t.$$set=n=>{"layout"in n&&i(0,l=n.layout)},[l]}class zt extends kt{constructor(e){super(),Ct(this,e,qt,St,jt,{layout:0})}}const{SvelteComponent:Bt,action_destroyer:Ht,append:N,attr:m,binding_callbacks:Mt,bubble:G,check_outros:U,create_component:ee,destroy_component:le,destroy_each:ul,detach:B,element:z,empty:je,ensure_array_like:fe,group_outros:V,init:Lt,insert:H,listen:Q,mount_component:te,noop:W,null_to_empty:Ze,run_all:Se,safe_not_equal:Tt,set_data:Et,set_style:At,space:x,src_url_equal:X,text:Dt,toggle_class:C,transition_in:d,transition_out:k}=window.__gradio__svelte__internal,{beforeUpdate:Nt,afterUpdate:Pt,createEventDispatcher:It}=window.__gradio__svelte__internal;function Fe(t,e,i){const l=t.slice();return l[37]=e[i],l[39]=i,l}function Oe(t,e,i){const l=t.slice();return l[40]=e[i],l[42]=i,l}function Re(t){let e,i,l;return i=new bl({props:{i18n:t[15],formatter:Yl,value:t[0]}}),i.$on("error",t[28]),i.$on("share",t[29]),{c(){e=z("div"),ee(i.$$.fragment),m(e,"class","share-button svelte-1pjfiar")},m(n,r){H(n,e,r),te(i,e,null),l=!0},p(n,r){const o={};r[0]&32768&&(o.i18n=n[15]),r[0]&1&&(o.value=n[0]),i.$set(o)},i(n){l||(d(i.$$.fragment,n),l=!0)},o(n){k(i.$$.fragment,n),l=!1},d(n){n&&B(e),le(i)}}}function Ye(t){let e,i,l=fe(t[0]),n=[];for(let o=0;ok(n[o],1,1,()=>{n[o]=null});return{c(){for(let o=0;o{q[Y]=null}),U()),~a?(u=q[a],u?u.p(t,h):(u=q[a]=F[a](t),u.c()),d(u,1),u.m(n,null)):u=null),(!v||h[0]&64&&f!==(f=t[6]?"rtl":"ltr"))&&m(n,"dir",f),(!v||h[0]&1&&w!==(w=(t[42]==0?"user":"bot")+"'s message:' 
"+t[40]))&&m(n,"aria-label",w),(!v||h[0]&1)&&C(n,"latest",t[39]===t[0].length-1),(!v||h[0]&2048)&&C(n,"message-markdown-disabled",!t[11]),(!v||h[0]&8)&&C(n,"selectable",t[3]),(!v||h[0]&66560)&&C(l,"message-fit",t[16]==="bubble"&&!t[10]),(!v||h[0]&65536)&&C(l,"panel-full-width",t[16]==="panel"),(!v||h[0]&65536)&&C(l,"message-bubble-border",t[16]==="bubble"),(!v||h[0]&2048)&&C(l,"message-markdown-disabled",!t[11]),t[4]&&t[42]!==0||t[7]&&t[40]&&typeof t[40]=="string"?b?(b.p(t,h),h[0]&145&&d(b,1)):(b=Ke(t),b.c(),d(b,1),b.m(e,P)):b&&(V(),k(b,1,1,()=>{b=null}),U()),(!v||h[0]&65536&&A!==(A="message-row "+t[16]+" "+(t[42]==0?"user-row":"bot-row")+" svelte-1pjfiar"))&&m(e,"class",A)},i(g){v||(d(u),d(b),v=!0)},o(g){k(u),k(b),v=!1},d(g){g&&B(e),p&&p.d(),~a&&q[a].d(),b&&b.d(),I=!1,Se(Z)}}}function Je(t){let e,i,l;return{c(){e=z("div"),i=z("img"),m(i,"class","avatar-image svelte-1pjfiar"),X(i.src,l=ze(t[8][t[42]],t[13],t[14]))||m(i,"src",l),m(i,"alt",(t[42]==0?"user":"bot")+" avatar"),m(e,"class","avatar-container svelte-1pjfiar")},m(n,r){H(n,e,r),N(e,i)},p(n,r){r[0]&24832&&!X(i.src,l=ze(n[8][n[42]],n[13],n[14]))&&m(i,"src",l)},d(n){n&&B(e)}}}function Ut(t){let e,i;return e=new zt({props:{layout:t[16]}}),{c(){ee(e.$$.fragment)},m(l,n){te(e,l,n),i=!0},p(l,n){const r={};n[0]&65536&&(r.layout=l[16]),e.$set(r)},i(l){i||(d(e.$$.fragment,l),i=!0)},o(l){k(e.$$.fragment,l),i=!1},d(l){le(e,l)}}}function Vt(t){let e,i=(t[40].file?.orig_name||t[40].file?.path)+"",l,n,r;return{c(){e=z("a"),l=Dt(i),m(e,"data-testid","chatbot-file"),m(e,"href",n=t[40].file?.url),m(e,"target","_blank"),m(e,"download",r=window.__is_colab__?null:t[40].file?.orig_name||t[40].file?.path),m(e,"class","svelte-1pjfiar")},m(o,s){H(o,e,s),N(e,l)},p(o,s){s[0]&1&&i!==(i=(o[40].file?.orig_name||o[40].file?.path)+"")&&Et(l,i),s[0]&1&&n!==(n=o[40].file?.url)&&m(e,"href",n),s[0]&1&&r!==(r=window.__is_colab__?null:o[40].file?.orig_name||o[40].file?.path)&&m(e,"download",r)},i:W,o:W,d(o){o&&B(e)}}}function Zt(t){let 
e,i,l;return{c(){e=z("img"),m(e,"data-testid","chatbot-image"),X(e.src,i=t[40].file?.url)||m(e,"src",i),m(e,"alt",l=t[40].alt_text),m(e,"class","svelte-1pjfiar")},m(n,r){H(n,e,r)},p(n,r){r[0]&1&&!X(e.src,i=n[40].file?.url)&&m(e,"src",i),r[0]&1&&l!==(l=n[40].alt_text)&&m(e,"alt",l)},i:W,o:W,d(n){n&&B(e)}}}function Ft(t){let e,i,l,n,r,o;return{c(){e=z("video"),i=z("track"),m(i,"kind","captions"),m(i,"class","svelte-1pjfiar"),m(e,"data-testid","chatbot-video"),e.controls=!0,X(e.src,l=t[40].file?.url)||m(e,"src",l),m(e,"title",n=t[40].alt_text),m(e,"preload","auto"),m(e,"class","svelte-1pjfiar")},m(s,a){H(s,e,a),N(e,i),r||(o=[Q(e,"play",t[25]),Q(e,"pause",t[26]),Q(e,"ended",t[27])],r=!0)},p(s,a){a[0]&1&&!X(e.src,l=s[40].file?.url)&&m(e,"src",l),a[0]&1&&n!==(n=s[40].alt_text)&&m(e,"title",n)},i:W,o:W,d(s){s&&B(e),r=!1,Se(o)}}}function Ot(t){let e,i,l,n,r;return{c(){e=z("audio"),m(e,"data-testid","chatbot-audio"),e.controls=!0,m(e,"preload","metadata"),X(e.src,i=t[40].file?.url)||m(e,"src",i),m(e,"title",l=t[40].alt_text),m(e,"class","svelte-1pjfiar")},m(o,s){H(o,e,s),n||(r=[Q(e,"play",t[22]),Q(e,"pause",t[23]),Q(e,"ended",t[24])],n=!0)},p(o,s){s[0]&1&&!X(e.src,i=o[40].file?.url)&&m(e,"src",i),s[0]&1&&l!==(l=o[40].alt_text)&&m(e,"title",l)},i:W,o:W,d(o){o&&B(e),n=!1,Se(r)}}}function Rt(t){let e,i;return e=new gl({props:{message:t[40],latex_delimiters:t[1],sanitize_html:t[9],render_markdown:t[11],line_breaks:t[12]}}),e.$on("load",t[18]),{c(){ee(e.$$.fragment)},m(l,n){te(e,l,n),i=!0},p(l,n){const r={};n[0]&1&&(r.message=l[40]),n[0]&2&&(r.latex_delimiters=l[1]),n[0]&512&&(r.sanitize_html=l[9]),n[0]&2048&&(r.render_markdown=l[11]),n[0]&4096&&(r.line_breaks=l[12]),e.$set(r)},i(l){i||(d(e.$$.fragment,l),i=!0)},o(l){k(e.$$.fragment,l),i=!1},d(l){le(e,l)}}}function Ke(t){let e,i,l,n,r=t[4]&&t[42]==1&&Qe(t),o=t[7]&&t[40]&&typeof t[40]=="string"&&We(t);return{c(){e=z("div"),r&&r.c(),i=x(),o&&o.c(),m(e,"class",l="message-buttons-"+(t[42]==0?"user":"bot")+" message-buttons-"+t[16]+" 
"+(t[8][t[42]]!==null&&"with-avatar")+" svelte-1pjfiar"),C(e,"message-buttons-fit",t[16]==="bubble"&&!t[10]),C(e,"bubble-buttons-user",t[16]==="bubble")},m(s,a){H(s,e,a),r&&r.m(e,null),N(e,i),o&&o.m(e,null),n=!0},p(s,a){s[4]&&s[42]==1?r?(r.p(s,a),a[0]&16&&d(r,1)):(r=Qe(s),r.c(),d(r,1),r.m(e,i)):r&&(V(),k(r,1,1,()=>{r=null}),U()),s[7]&&s[40]&&typeof s[40]=="string"?o?(o.p(s,a),a[0]&129&&d(o,1)):(o=We(s),o.c(),d(o,1),o.m(e,null)):o&&(V(),k(o,1,1,()=>{o=null}),U()),(!n||a[0]&65792&&l!==(l="message-buttons-"+(s[42]==0?"user":"bot")+" message-buttons-"+s[16]+" "+(s[8][s[42]]!==null&&"with-avatar")+" svelte-1pjfiar"))&&m(e,"class",l),(!n||a[0]&66816)&&C(e,"message-buttons-fit",s[16]==="bubble"&&!s[10]),(!n||a[0]&65792)&&C(e,"bubble-buttons-user",s[16]==="bubble")},i(s){n||(d(r),d(o),n=!0)},o(s){k(r),k(o),n=!1},d(s){s&&B(e),r&&r.d(),o&&o.d()}}}function Qe(t){let e,i,l,n;function r(){return t[32](t[39],t[42],t[40])}e=new Ie({props:{action:"like",handle_action:r}});function o(){return t[33](t[39],t[42],t[40])}return l=new Ie({props:{action:"dislike",handle_action:o}}),{c(){ee(e.$$.fragment),i=x(),ee(l.$$.fragment)},m(s,a){te(e,s,a),H(s,i,a),te(l,s,a),n=!0},p(s,a){t=s;const u={};a[0]&1&&(u.handle_action=r),e.$set(u);const f={};a[0]&1&&(f.handle_action=o),l.$set(f)},i(s){n||(d(e.$$.fragment,s),d(l.$$.fragment,s),n=!0)},o(s){k(e.$$.fragment,s),k(l.$$.fragment,s),n=!1},d(s){s&&B(i),le(e,s),le(l,s)}}}function We(t){let e,i;return e=new it({props:{value:t[40]}}),{c(){ee(e.$$.fragment)},m(l,n){te(e,l,n),i=!0},p(l,n){const r={};n[0]&1&&(r.value=l[40]),e.$set(r)},i(l){i||(d(e.$$.fragment,l),i=!0)},o(l){k(e.$$.fragment,l),i=!1},d(l){le(e,l)}}}function Xe(t){let e,i,l=(t[40]!==null||t[2])&&Ge(t);return{c(){l&&l.c(),e=je()},m(n,r){l&&l.m(n,r),H(n,e,r),i=!0},p(n,r){n[40]!==null||n[2]?l?(l.p(n,r),r[0]&5&&d(l,1)):(l=Ge(n),l.c(),d(l,1),l.m(e.parentNode,e)):l&&(V(),k(l,1,1,()=>{l=null}),U())},i(n){i||(d(l),i=!0)},o(n){k(l),i=!1},d(n){n&&B(e),l&&l.d(n)}}}function $e(t){let 
e,i,l=fe(t[37]),n=[];for(let o=0;ok(n[o],1,1,()=>{n[o]=null});return{c(){for(let o=0;o0&&Re(t),u=t[0]!==null&&Ye(t);return{c(){a&&a.c(),e=x(),i=z("div"),l=z("div"),u&&u.c(),m(l,"class","message-wrap svelte-1pjfiar"),C(l,"bubble-gap",t[16]==="bubble"),m(i,"class",n=Ze(t[16]==="bubble"?"bubble-wrap":"panel-wrap")+" svelte-1pjfiar"),m(i,"role","log"),m(i,"aria-label","chatbot conversation"),m(i,"aria-live","polite")},m(f,w){a&&a.m(f,w),H(f,e,w),H(f,i,w),N(i,l),u&&u.m(l,null),t[34](i),r=!0,o||(s=Ht(ml.call(null,l)),o=!0)},p(f,w){f[5]&&f[0]!==null&&f[0].length>0?a?(a.p(f,w),w[0]&33&&d(a,1)):(a=Re(f),a.c(),d(a,1),a.m(e.parentNode,e)):a&&(V(),k(a,1,1,()=>{a=null}),U()),f[0]!==null?u?(u.p(f,w),w[0]&1&&d(u,1)):(u=Ye(f),u.c(),d(u,1),u.m(l,null)):u&&(V(),k(u,1,1,()=>{u=null}),U()),(!r||w[0]&65536)&&C(l,"bubble-gap",f[16]==="bubble"),(!r||w[0]&65536&&n!==(n=Ze(f[16]==="bubble"?"bubble-wrap":"panel-wrap")+" svelte-1pjfiar"))&&m(i,"class",n)},i(f){r||(d(a),d(u),r=!0)},o(f){k(a),k(u),r=!1},d(f){f&&(B(e),B(i)),a&&a.d(f),u&&u.d(),t[34](null),o=!1,s()}}}function Gt(t,e,i){let{value:l}=e,n=null,{latex_delimiters:r}=e,{pending_message:o=!1}=e,{selectable:s=!1}=e,{likeable:a=!1}=e,{show_share_button:u=!1}=e,{rtl:f=!1}=e,{show_copy_button:w=!1}=e,{avatar_images:L=[null,null]}=e,{sanitize_html:P=!0}=e,{bubble_full_width:A=!0}=e,{render_markdown:v=!0}=e,{line_breaks:I=!0}=e,{root:Z}=e,{proxy_url:p}=e,{i18n:F}=e,{layout:q="bubble"}=e,S,O;const R=It();Nt(()=>{O=S&&S.offsetHeight+S.scrollTop>S.scrollHeight-100});const b=()=>{O&&S.scrollTo(0,S.scrollHeight)};Pt(()=>{O&&(b(),S.querySelectorAll("img").forEach(c=>{c.addEventListener("load",()=>{b()})}))});function g(c,T,E){R("select",{index:[c,T],value:E})}function h(c,T,E,ke){R("like",{index:[c,T],value:E,liked:ke})}function Y(c){G.call(this,t,c)}function ae(c){G.call(this,t,c)}function oe(c){G.call(this,t,c)}function re(c){G.call(this,t,c)}function de(c){G.call(this,t,c)}function be(c){G.call(this,t,c)}function he(c){G.call(this,t,c)}function 
ge(c){G.call(this,t,c)}const we=(c,T,E)=>g(c,T,E),_=(c,T,E,ke)=>{ke.key==="Enter"&&g(c,T,E)},ne=(c,T,E)=>h(c,T,E,!0),fl=(c,T,E)=>h(c,T,E,!1);function _l(c){Mt[c?"unshift":"push"](()=>{S=c,i(17,S)})}return t.$$set=c=>{"value"in c&&i(0,l=c.value),"latex_delimiters"in c&&i(1,r=c.latex_delimiters),"pending_message"in c&&i(2,o=c.pending_message),"selectable"in c&&i(3,s=c.selectable),"likeable"in c&&i(4,a=c.likeable),"show_share_button"in c&&i(5,u=c.show_share_button),"rtl"in c&&i(6,f=c.rtl),"show_copy_button"in c&&i(7,w=c.show_copy_button),"avatar_images"in c&&i(8,L=c.avatar_images),"sanitize_html"in c&&i(9,P=c.sanitize_html),"bubble_full_width"in c&&i(10,A=c.bubble_full_width),"render_markdown"in c&&i(11,v=c.render_markdown),"line_breaks"in c&&i(12,I=c.line_breaks),"root"in c&&i(13,Z=c.root),"proxy_url"in c&&i(14,p=c.proxy_url),"i18n"in c&&i(15,F=c.i18n),"layout"in c&&i(16,q=c.layout)},t.$$.update=()=>{t.$$.dirty[0]&2097153&&(dl(l,n)||(i(21,n=l),R("change")))},[l,r,o,s,a,u,f,w,L,P,A,v,I,Z,p,F,q,S,b,g,h,n,Y,ae,oe,re,de,be,he,ge,we,_,ne,fl,_l]}class Jt extends Bt{constructor(e){super(),Lt(this,e,Gt,Yt,Tt,{value:0,latex_delimiters:1,pending_message:2,selectable:3,likeable:4,show_share_button:5,rtl:6,show_copy_button:7,avatar_images:8,sanitize_html:9,bubble_full_width:10,render_markdown:11,line_breaks:12,root:13,proxy_url:14,i18n:15,layout:16},null,[-1,-1])}}const Kt=Jt;const{SvelteComponent:Qt,append:Wt,assign:Xt,attr:$t,check_outros:xe,create_component:_e,destroy_component:ce,detach:el,element:xt,get_spread_object:en,get_spread_update:ln,group_outros:ll,init:tn,insert:tl,mount_component:me,safe_not_equal:nn,space:nl,transition_in:D,transition_out:K}=window.__gradio__svelte__internal;function il(t){let e,i;const l=[{autoscroll:t[21].autoscroll},{i18n:t[21].i18n},t[23],{show_progress:t[23].show_progress==="hidden"?"hidden":"minimal"}];let n={};for(let 
r=0;r{o=null}),xe()),a[7]?s?(s.p(a,u),u[0]&128&&D(s,1)):(s=al(a),s.c(),D(s,1),s.m(i,l)):s&&(ll(),K(s,1,1,()=>{s=null}),xe());const f={};u[0]&2097152&&(f.i18n=a[21].i18n),u[0]&1024&&(f.selectable=a[10]),u[0]&2048&&(f.likeable=a[11]),u[0]&4096&&(f.show_share_button=a[12]),u[0]&33554432&&(f.value=a[25]),u[0]&1048576&&(f.latex_delimiters=a[20]),u[0]&262144&&(f.render_markdown=a[18]),u[0]&8388608&&(f.pending_message=a[23]?.status==="pending"),u[0]&8192&&(f.rtl=a[13]),u[0]&16384&&(f.show_copy_button=a[14]),u[0]&4194304&&(f.avatar_images=a[22]),u[0]&32768&&(f.sanitize_html=a[15]),u[0]&65536&&(f.bubble_full_width=a[16]),u[0]&524288&&(f.line_breaks=a[19]),u[0]&131072&&(f.layout=a[17]),u[0]&512&&(f.proxy_url=a[9]),u[0]&256&&(f.root=a[8]),n.$set(f)},i(a){r||(D(o),D(s),D(n.$$.fragment,a),r=!0)},o(a){K(o),K(s),K(n.$$.fragment,a),r=!1},d(a){a&&(el(e),el(i)),o&&o.d(a),s&&s.d(),ce(n)}}}function on(t){let e,i;return e=new cl({props:{elem_id:t[0],elem_classes:t[1],visible:t[2],padding:!1,scale:t[4],min_width:t[5],height:t[24],allow_overflow:!1,$$slots:{default:[an]},$$scope:{ctx:t}}}),{c(){_e(e.$$.fragment)},m(l,n){me(e,l,n),i=!0},p(l,n){const r={};n[0]&1&&(r.elem_id=l[0]),n[0]&2&&(r.elem_classes=l[1]),n[0]&4&&(r.visible=l[2]),n[0]&16&&(r.scale=l[4]),n[0]&32&&(r.min_width=l[5]),n[0]&16777216&&(r.height=l[24]),n[0]&50331592|n[1]&4&&(r.$$scope={dirty:n,ctx:l}),e.$set(r)},i(l){i||(D(e.$$.fragment,l),i=!0)},o(l){K(e.$$.fragment,l),i=!1},d(l){ce(e,l)}}}function rn(t,e,i){let{elem_id:l=""}=e,{elem_classes:n=[]}=e,{visible:r=!0}=e,{value:o=[]}=e,{scale:s=null}=e,{min_width:a=void 0}=e,{label:u}=e,{show_label:f=!0}=e,{root:w}=e,{proxy_url:L}=e,{_selectable:P=!1}=e,{likeable:A=!1}=e,{show_share_button:v=!1}=e,{rtl:I=!1}=e,{show_copy_button:Z=!1}=e,{sanitize_html:p=!0}=e,{bubble_full_width:F=!0}=e,{layout:q="bubble"}=e,{render_markdown:S=!0}=e,{line_breaks:O=!0}=e,{latex_delimiters:R}=e,{gradio:b}=e,{avatar_images:g=[null,null]}=e,h;const Y=_=>_.replace('src="/file',`src="${w}file`);function 
ae(_){return _===null?_:{file:wl(_?.file,w,L),alt_text:_?.alt_text}}let{loading_status:oe=void 0}=e,{height:re=400}=e;const de=()=>b.dispatch("change",o),be=_=>b.dispatch("select",_.detail),he=_=>b.dispatch("like",_.detail),ge=_=>b.dispatch("share",_.detail),we=_=>b.dispatch("error",_.detail);return t.$$set=_=>{"elem_id"in _&&i(0,l=_.elem_id),"elem_classes"in _&&i(1,n=_.elem_classes),"visible"in _&&i(2,r=_.visible),"value"in _&&i(3,o=_.value),"scale"in _&&i(4,s=_.scale),"min_width"in _&&i(5,a=_.min_width),"label"in _&&i(6,u=_.label),"show_label"in _&&i(7,f=_.show_label),"root"in _&&i(8,w=_.root),"proxy_url"in _&&i(9,L=_.proxy_url),"_selectable"in _&&i(10,P=_._selectable),"likeable"in _&&i(11,A=_.likeable),"show_share_button"in _&&i(12,v=_.show_share_button),"rtl"in _&&i(13,I=_.rtl),"show_copy_button"in _&&i(14,Z=_.show_copy_button),"sanitize_html"in _&&i(15,p=_.sanitize_html),"bubble_full_width"in _&&i(16,F=_.bubble_full_width),"layout"in _&&i(17,q=_.layout),"render_markdown"in _&&i(18,S=_.render_markdown),"line_breaks"in _&&i(19,O=_.line_breaks),"latex_delimiters"in _&&i(20,R=_.latex_delimiters),"gradio"in _&&i(21,b=_.gradio),"avatar_images"in _&&i(22,g=_.avatar_images),"loading_status"in _&&i(23,oe=_.loading_status),"height"in _&&i(24,re=_.height)},t.$$.update=()=>{t.$$.dirty[0]&8&&i(25,h=o?o.map(([_,ne])=>[typeof _=="string"?Y(_):ae(_),typeof ne=="string"?Y(ne):ae(ne)]):[])},[l,n,r,o,s,a,u,f,w,L,P,A,v,I,Z,p,F,q,S,O,R,b,g,oe,re,h,de,be,he,ge,we]}class vn extends Qt{constructor(e){super(),tn(this,e,rn,on,nn,{elem_id:0,elem_classes:1,visible:2,value:3,scale:4,min_width:5,label:6,show_label:7,root:8,proxy_url:9,_selectable:10,likeable:11,show_share_button:12,rtl:13,show_copy_button:14,sanitize_html:15,bubble_full_width:16,layout:17,render_markdown:18,line_breaks:19,latex_delimiters:20,gradio:21,avatar_images:22,loading_status:23,height:24},null,[-1,-1])}}export{Kt as BaseChatBot,vn as default}; -//# sourceMappingURL=Index-cb04d13d.js.map diff --git 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_textpath.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_textpath.py deleted file mode 100644 index e421d2623cadac7931f1f21148bdc0a603c290b1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_textpath.py +++ /dev/null @@ -1,10 +0,0 @@ -import copy - -from matplotlib.textpath import TextPath - - -def test_copy(): - tp = TextPath((0, 0), ".") - assert copy.deepcopy(tp).vertices is not tp.vertices - assert (copy.deepcopy(tp).vertices == tp.vertices).all() - assert copy.copy(tp).vertices is tp.vertices diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/parsers/c_parser_wrapper.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/parsers/c_parser_wrapper.py deleted file mode 100644 index 0cd788c5e57399597e3fe4ee1b1bf2af4bffd74b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/parsers/c_parser_wrapper.py +++ /dev/null @@ -1,410 +0,0 @@ -from __future__ import annotations - -from collections import defaultdict -from typing import TYPE_CHECKING -import warnings - -import numpy as np - -from pandas._libs import ( - lib, - parsers, -) -from pandas.compat._optional import import_optional_dependency -from pandas.errors import DtypeWarning -from pandas.util._exceptions import find_stack_level - -from pandas.core.dtypes.common import pandas_dtype -from pandas.core.dtypes.concat import ( - concat_compat, - union_categoricals, -) -from pandas.core.dtypes.dtypes import CategoricalDtype - -from pandas.core.indexes.api import ensure_index_from_sequences - -from pandas.io.common import ( - dedup_names, - is_potential_multi_index, -) -from pandas.io.parsers.base_parser import ( - ParserBase, - ParserError, - is_index_col, -) - -if TYPE_CHECKING: - 
from collections.abc import ( - Hashable, - Mapping, - Sequence, - ) - - from pandas._typing import ( - ArrayLike, - DtypeArg, - DtypeObj, - ReadCsvBuffer, - ) - - from pandas import ( - Index, - MultiIndex, - ) - - -class CParserWrapper(ParserBase): - low_memory: bool - _reader: parsers.TextReader - - def __init__(self, src: ReadCsvBuffer[str], **kwds) -> None: - super().__init__(kwds) - self.kwds = kwds - kwds = kwds.copy() - - self.low_memory = kwds.pop("low_memory", False) - - # #2442 - # error: Cannot determine type of 'index_col' - kwds["allow_leading_cols"] = ( - self.index_col is not False # type: ignore[has-type] - ) - - # GH20529, validate usecol arg before TextReader - kwds["usecols"] = self.usecols - - # Have to pass int, would break tests using TextReader directly otherwise :( - kwds["on_bad_lines"] = self.on_bad_lines.value - - for key in ( - "storage_options", - "encoding", - "memory_map", - "compression", - ): - kwds.pop(key, None) - - kwds["dtype"] = ensure_dtype_objs(kwds.get("dtype", None)) - if "dtype_backend" not in kwds or kwds["dtype_backend"] is lib.no_default: - kwds["dtype_backend"] = "numpy" - if kwds["dtype_backend"] == "pyarrow": - # Fail here loudly instead of in cython after reading - import_optional_dependency("pyarrow") - self._reader = parsers.TextReader(src, **kwds) - - self.unnamed_cols = self._reader.unnamed_cols - - # error: Cannot determine type of 'names' - passed_names = self.names is None # type: ignore[has-type] - - if self._reader.header is None: - self.names = None - else: - # error: Cannot determine type of 'names' - # error: Cannot determine type of 'index_names' - ( - self.names, # type: ignore[has-type] - self.index_names, - self.col_names, - passed_names, - ) = self._extract_multi_indexer_columns( - self._reader.header, - self.index_names, # type: ignore[has-type] - passed_names, - ) - - # error: Cannot determine type of 'names' - if self.names is None: # type: ignore[has-type] - self.names = 
list(range(self._reader.table_width)) - - # gh-9755 - # - # need to set orig_names here first - # so that proper indexing can be done - # with _set_noconvert_columns - # - # once names has been filtered, we will - # then set orig_names again to names - # error: Cannot determine type of 'names' - self.orig_names = self.names[:] # type: ignore[has-type] - - if self.usecols: - usecols = self._evaluate_usecols(self.usecols, self.orig_names) - - # GH 14671 - # assert for mypy, orig_names is List or None, None would error in issubset - assert self.orig_names is not None - if self.usecols_dtype == "string" and not set(usecols).issubset( - self.orig_names - ): - self._validate_usecols_names(usecols, self.orig_names) - - # error: Cannot determine type of 'names' - if len(self.names) > len(usecols): # type: ignore[has-type] - # error: Cannot determine type of 'names' - self.names = [ # type: ignore[has-type] - n - # error: Cannot determine type of 'names' - for i, n in enumerate(self.names) # type: ignore[has-type] - if (i in usecols or n in usecols) - ] - - # error: Cannot determine type of 'names' - if len(self.names) < len(usecols): # type: ignore[has-type] - # error: Cannot determine type of 'names' - self._validate_usecols_names( - usecols, - self.names, # type: ignore[has-type] - ) - - # error: Cannot determine type of 'names' - self._validate_parse_dates_presence(self.names) # type: ignore[has-type] - self._set_noconvert_columns() - - # error: Cannot determine type of 'names' - self.orig_names = self.names # type: ignore[has-type] - - if not self._has_complex_date_col: - # error: Cannot determine type of 'index_col' - if self._reader.leading_cols == 0 and is_index_col( - self.index_col # type: ignore[has-type] - ): - self._name_processed = True - ( - index_names, - # error: Cannot determine type of 'names' - self.names, # type: ignore[has-type] - self.index_col, - ) = self._clean_index_names( - # error: Cannot determine type of 'names' - self.names, # type: 
ignore[has-type] - # error: Cannot determine type of 'index_col' - self.index_col, # type: ignore[has-type] - ) - - if self.index_names is None: - self.index_names = index_names - - if self._reader.header is None and not passed_names: - assert self.index_names is not None - self.index_names = [None] * len(self.index_names) - - self._implicit_index = self._reader.leading_cols > 0 - - def close(self) -> None: - # close handles opened by C parser - try: - self._reader.close() - except ValueError: - pass - - def _set_noconvert_columns(self) -> None: - """ - Set the columns that should not undergo dtype conversions. - - Currently, any column that is involved with date parsing will not - undergo such conversions. - """ - assert self.orig_names is not None - # error: Cannot determine type of 'names' - - # much faster than using orig_names.index(x) xref GH#44106 - names_dict = {x: i for i, x in enumerate(self.orig_names)} - col_indices = [names_dict[x] for x in self.names] # type: ignore[has-type] - # error: Cannot determine type of 'names' - noconvert_columns = self._set_noconvert_dtype_columns( - col_indices, - self.names, # type: ignore[has-type] - ) - for col in noconvert_columns: - self._reader.set_noconvert(col) - - def read( - self, - nrows: int | None = None, - ) -> tuple[ - Index | MultiIndex | None, - Sequence[Hashable] | MultiIndex, - Mapping[Hashable, ArrayLike], - ]: - index: Index | MultiIndex | None - column_names: Sequence[Hashable] | MultiIndex - try: - if self.low_memory: - chunks = self._reader.read_low_memory(nrows) - # destructive to chunks - data = _concatenate_chunks(chunks) - - else: - data = self._reader.read(nrows) - except StopIteration: - if self._first_chunk: - self._first_chunk = False - names = dedup_names( - self.orig_names, - is_potential_multi_index(self.orig_names, self.index_col), - ) - index, columns, col_dict = self._get_empty_meta( - names, - dtype=self.dtype, - ) - columns = self._maybe_make_multi_index_columns(columns, 
self.col_names) - - if self.usecols is not None: - columns = self._filter_usecols(columns) - - col_dict = {k: v for k, v in col_dict.items() if k in columns} - - return index, columns, col_dict - - else: - self.close() - raise - - # Done with first read, next time raise StopIteration - self._first_chunk = False - - # error: Cannot determine type of 'names' - names = self.names # type: ignore[has-type] - - if self._reader.leading_cols: - if self._has_complex_date_col: - raise NotImplementedError("file structure not yet supported") - - # implicit index, no index names - arrays = [] - - if self.index_col and self._reader.leading_cols != len(self.index_col): - raise ParserError( - "Could not construct index. Requested to use " - f"{len(self.index_col)} number of columns, but " - f"{self._reader.leading_cols} left to parse." - ) - - for i in range(self._reader.leading_cols): - if self.index_col is None: - values = data.pop(i) - else: - values = data.pop(self.index_col[i]) - - values = self._maybe_parse_dates(values, i, try_parse_dates=True) - arrays.append(values) - - index = ensure_index_from_sequences(arrays) - - if self.usecols is not None: - names = self._filter_usecols(names) - - names = dedup_names(names, is_potential_multi_index(names, self.index_col)) - - # rename dict keys - data_tups = sorted(data.items()) - data = {k: v for k, (i, v) in zip(names, data_tups)} - - column_names, date_data = self._do_date_conversions(names, data) - - # maybe create a mi on the columns - column_names = self._maybe_make_multi_index_columns( - column_names, self.col_names - ) - - else: - # rename dict keys - data_tups = sorted(data.items()) - - # ugh, mutation - - # assert for mypy, orig_names is List or None, None would error in list(...) 
- assert self.orig_names is not None - names = list(self.orig_names) - names = dedup_names(names, is_potential_multi_index(names, self.index_col)) - - if self.usecols is not None: - names = self._filter_usecols(names) - - # columns as list - alldata = [x[1] for x in data_tups] - if self.usecols is None: - self._check_data_length(names, alldata) - - data = {k: v for k, (i, v) in zip(names, data_tups)} - - names, date_data = self._do_date_conversions(names, data) - index, column_names = self._make_index(date_data, alldata, names) - - return index, column_names, date_data - - def _filter_usecols(self, names: Sequence[Hashable]) -> Sequence[Hashable]: - # hackish - usecols = self._evaluate_usecols(self.usecols, names) - if usecols is not None and len(names) != len(usecols): - names = [ - name for i, name in enumerate(names) if i in usecols or name in usecols - ] - return names - - def _maybe_parse_dates(self, values, index: int, try_parse_dates: bool = True): - if try_parse_dates and self._should_parse_dates(index): - values = self._date_conv( - values, - col=self.index_names[index] if self.index_names is not None else None, - ) - return values - - -def _concatenate_chunks(chunks: list[dict[int, ArrayLike]]) -> dict: - """ - Concatenate chunks of data read with low_memory=True. - - The tricky part is handling Categoricals, where different chunks - may have different inferred categories. - """ - names = list(chunks[0].keys()) - warning_columns = [] - - result: dict = {} - for name in names: - arrs = [chunk.pop(name) for chunk in chunks] - # Check each arr for consistent types. 
- dtypes = {a.dtype for a in arrs} - non_cat_dtypes = {x for x in dtypes if not isinstance(x, CategoricalDtype)} - - dtype = dtypes.pop() - if isinstance(dtype, CategoricalDtype): - result[name] = union_categoricals(arrs, sort_categories=False) - else: - result[name] = concat_compat(arrs) - if len(non_cat_dtypes) > 1 and result[name].dtype == np.dtype(object): - warning_columns.append(str(name)) - - if warning_columns: - warning_names = ",".join(warning_columns) - warning_message = " ".join( - [ - f"Columns ({warning_names}) have mixed types. " - f"Specify dtype option on import or set low_memory=False." - ] - ) - warnings.warn(warning_message, DtypeWarning, stacklevel=find_stack_level()) - return result - - -def ensure_dtype_objs( - dtype: DtypeArg | dict[Hashable, DtypeArg] | None -) -> DtypeObj | dict[Hashable, DtypeObj] | None: - """ - Ensure we have either None, a dtype object, or a dictionary mapping to - dtype objects. - """ - if isinstance(dtype, defaultdict): - # "None" not callable [misc] - default_dtype = pandas_dtype(dtype.default_factory()) # type: ignore[misc] - dtype_converted: defaultdict = defaultdict(lambda: default_dtype) - for key in dtype.keys(): - dtype_converted[key] = pandas_dtype(dtype[key]) - return dtype_converted - elif isinstance(dtype, dict): - return {k: pandas_dtype(dtype[k]) for k in dtype} - elif dtype is not None: - return pandas_dtype(dtype) - return dtype diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/masked/test_arrow_compat.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/masked/test_arrow_compat.py deleted file mode 100644 index fc2094bd9f4a8f32ebdeb3dcdb9b5a1e5d28c20a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/masked/test_arrow_compat.py +++ /dev/null @@ -1,204 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -import pandas._testing as 
tm - -pa = pytest.importorskip("pyarrow", minversion="1.0.1") - -from pandas.core.arrays.arrow._arrow_utils import pyarrow_array_to_numpy_and_mask - -arrays = [pd.array([1, 2, 3, None], dtype=dtype) for dtype in tm.ALL_INT_EA_DTYPES] -arrays += [pd.array([0.1, 0.2, 0.3, None], dtype=dtype) for dtype in tm.FLOAT_EA_DTYPES] -arrays += [pd.array([True, False, True, None], dtype="boolean")] - - -@pytest.fixture(params=arrays, ids=[a.dtype.name for a in arrays]) -def data(request): - """ - Fixture returning parametrized array from given dtype, including integer, - float and boolean - """ - return request.param - - -def test_arrow_array(data): - arr = pa.array(data) - expected = pa.array( - data.to_numpy(object, na_value=None), - type=pa.from_numpy_dtype(data.dtype.numpy_dtype), - ) - assert arr.equals(expected) - - -def test_arrow_roundtrip(data): - df = pd.DataFrame({"a": data}) - table = pa.table(df) - assert table.field("a").type == str(data.dtype.numpy_dtype) - result = table.to_pandas() - assert result["a"].dtype == data.dtype - tm.assert_frame_equal(result, df) - - -def test_dataframe_from_arrow_types_mapper(): - def types_mapper(arrow_type): - if pa.types.is_boolean(arrow_type): - return pd.BooleanDtype() - elif pa.types.is_integer(arrow_type): - return pd.Int64Dtype() - - bools_array = pa.array([True, None, False], type=pa.bool_()) - ints_array = pa.array([1, None, 2], type=pa.int64()) - small_ints_array = pa.array([-1, 0, 7], type=pa.int8()) - record_batch = pa.RecordBatch.from_arrays( - [bools_array, ints_array, small_ints_array], ["bools", "ints", "small_ints"] - ) - result = record_batch.to_pandas(types_mapper=types_mapper) - bools = pd.Series([True, None, False], dtype="boolean") - ints = pd.Series([1, None, 2], dtype="Int64") - small_ints = pd.Series([-1, 0, 7], dtype="Int64") - expected = pd.DataFrame({"bools": bools, "ints": ints, "small_ints": small_ints}) - tm.assert_frame_equal(result, expected) - - -def test_arrow_load_from_zero_chunks(data): - # 
GH-41040 - - df = pd.DataFrame({"a": data[0:0]}) - table = pa.table(df) - assert table.field("a").type == str(data.dtype.numpy_dtype) - table = pa.table( - [pa.chunked_array([], type=table.field("a").type)], schema=table.schema - ) - result = table.to_pandas() - assert result["a"].dtype == data.dtype - tm.assert_frame_equal(result, df) - - -def test_arrow_from_arrow_uint(): - # https://github.com/pandas-dev/pandas/issues/31896 - # possible mismatch in types - - dtype = pd.UInt32Dtype() - result = dtype.__from_arrow__(pa.array([1, 2, 3, 4, None], type="int64")) - expected = pd.array([1, 2, 3, 4, None], dtype="UInt32") - - tm.assert_extension_array_equal(result, expected) - - -def test_arrow_sliced(data): - # https://github.com/pandas-dev/pandas/issues/38525 - - df = pd.DataFrame({"a": data}) - table = pa.table(df) - result = table.slice(2, None).to_pandas() - expected = df.iloc[2:].reset_index(drop=True) - tm.assert_frame_equal(result, expected) - - # no missing values - df2 = df.fillna(data[0]) - table = pa.table(df2) - result = table.slice(2, None).to_pandas() - expected = df2.iloc[2:].reset_index(drop=True) - tm.assert_frame_equal(result, expected) - - -@pytest.fixture -def np_dtype_to_arrays(any_real_numpy_dtype): - """ - Fixture returning actual and expected dtype, pandas and numpy arrays and - mask from a given numpy dtype - """ - np_dtype = np.dtype(any_real_numpy_dtype) - pa_type = pa.from_numpy_dtype(np_dtype) - - # None ensures the creation of a bitmask buffer. - pa_array = pa.array([0, 1, 2, None], type=pa_type) - # Since masked Arrow buffer slots are not required to contain a specific - # value, assert only the first three values of the created np.array - np_expected = np.array([0, 1, 2], dtype=np_dtype) - mask_expected = np.array([True, True, True, False]) - return np_dtype, pa_array, np_expected, mask_expected - - -def test_pyarrow_array_to_numpy_and_mask(np_dtype_to_arrays): - """ - Test conversion from pyarrow array to numpy array. 
- - Modifies the pyarrow buffer to contain padding and offset, which are - considered valid buffers by pyarrow. - - Also tests empty pyarrow arrays with non empty buffers. - See https://github.com/pandas-dev/pandas/issues/40896 - """ - np_dtype, pa_array, np_expected, mask_expected = np_dtype_to_arrays - data, mask = pyarrow_array_to_numpy_and_mask(pa_array, np_dtype) - tm.assert_numpy_array_equal(data[:3], np_expected) - tm.assert_numpy_array_equal(mask, mask_expected) - - mask_buffer = pa_array.buffers()[0] - data_buffer = pa_array.buffers()[1] - data_buffer_bytes = pa_array.buffers()[1].to_pybytes() - - # Add trailing padding to the buffer. - data_buffer_trail = pa.py_buffer(data_buffer_bytes + b"\x00") - pa_array_trail = pa.Array.from_buffers( - type=pa_array.type, - length=len(pa_array), - buffers=[mask_buffer, data_buffer_trail], - offset=pa_array.offset, - ) - pa_array_trail.validate() - data, mask = pyarrow_array_to_numpy_and_mask(pa_array_trail, np_dtype) - tm.assert_numpy_array_equal(data[:3], np_expected) - tm.assert_numpy_array_equal(mask, mask_expected) - - # Add offset to the buffer. 
- offset = b"\x00" * (pa_array.type.bit_width // 8) - data_buffer_offset = pa.py_buffer(offset + data_buffer_bytes) - mask_buffer_offset = pa.py_buffer(b"\x0E") - pa_array_offset = pa.Array.from_buffers( - type=pa_array.type, - length=len(pa_array), - buffers=[mask_buffer_offset, data_buffer_offset], - offset=pa_array.offset + 1, - ) - pa_array_offset.validate() - data, mask = pyarrow_array_to_numpy_and_mask(pa_array_offset, np_dtype) - tm.assert_numpy_array_equal(data[:3], np_expected) - tm.assert_numpy_array_equal(mask, mask_expected) - - # Empty array - np_expected_empty = np.array([], dtype=np_dtype) - mask_expected_empty = np.array([], dtype=np.bool_) - - pa_array_offset = pa.Array.from_buffers( - type=pa_array.type, - length=0, - buffers=[mask_buffer, data_buffer], - offset=pa_array.offset, - ) - pa_array_offset.validate() - data, mask = pyarrow_array_to_numpy_and_mask(pa_array_offset, np_dtype) - tm.assert_numpy_array_equal(data[:3], np_expected_empty) - tm.assert_numpy_array_equal(mask, mask_expected_empty) - - -@pytest.mark.parametrize( - "arr", [pa.nulls(10), pa.chunked_array([pa.nulls(4), pa.nulls(6)])] -) -def test_from_arrow_null(data, arr): - res = data.dtype.__from_arrow__(arr) - assert res.isna().all() - assert len(res) == 10 - - -def test_from_arrow_type_error(data): - # ensure that __from_arrow__ returns a TypeError when getting a wrong - # array type - - arr = pa.array(data).cast("string") - with pytest.raises(TypeError, match=None): - # we don't test the exact error message, only the fact that it raises - # a TypeError is relevant - data.dtype.__from_arrow__(arr) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/window/test_cython_aggregations.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/window/test_cython_aggregations.py deleted file mode 100644 index c60cb6ea74ec0aa90cf089841c853c657e1b4c00..0000000000000000000000000000000000000000 --- 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/window/test_cython_aggregations.py +++ /dev/null @@ -1,111 +0,0 @@ -from functools import partial -import sys - -import numpy as np -import pytest - -import pandas._libs.window.aggregations as window_aggregations - -from pandas import Series -import pandas._testing as tm - - -def _get_rolling_aggregations(): - # list pairs of name and function - # each function has this signature: - # (const float64_t[:] values, ndarray[int64_t] start, - # ndarray[int64_t] end, int64_t minp) -> np.ndarray - named_roll_aggs = ( - [ - ("roll_sum", window_aggregations.roll_sum), - ("roll_mean", window_aggregations.roll_mean), - ] - + [ - (f"roll_var({ddof})", partial(window_aggregations.roll_var, ddof=ddof)) - for ddof in [0, 1] - ] - + [ - ("roll_skew", window_aggregations.roll_skew), - ("roll_kurt", window_aggregations.roll_kurt), - ("roll_median_c", window_aggregations.roll_median_c), - ("roll_max", window_aggregations.roll_max), - ("roll_min", window_aggregations.roll_min), - ] - + [ - ( - f"roll_quantile({quantile},{interpolation})", - partial( - window_aggregations.roll_quantile, - quantile=quantile, - interpolation=interpolation, - ), - ) - for quantile in [0.0001, 0.5, 0.9999] - for interpolation in window_aggregations.interpolation_types - ] - + [ - ( - f"roll_rank({percentile},{method},{ascending})", - partial( - window_aggregations.roll_rank, - percentile=percentile, - method=method, - ascending=ascending, - ), - ) - for percentile in [True, False] - for method in window_aggregations.rolling_rank_tiebreakers.keys() - for ascending in [True, False] - ] - ) - # unzip to a list of 2 tuples, names and functions - unzipped = list(zip(*named_roll_aggs)) - return {"ids": unzipped[0], "params": unzipped[1]} - - -_rolling_aggregations = _get_rolling_aggregations() - - -@pytest.fixture( - params=_rolling_aggregations["params"], ids=_rolling_aggregations["ids"] -) -def rolling_aggregation(request): - """Make 
a rolling aggregation function as fixture.""" - return request.param - - -def test_rolling_aggregation_boundary_consistency(rolling_aggregation): - # GH-45647 - minp, step, width, size, selection = 0, 1, 3, 11, [2, 7] - values = np.arange(1, 1 + size, dtype=np.float64) - end = np.arange(width, size, step, dtype=np.int64) - start = end - width - selarr = np.array(selection, dtype=np.int32) - result = Series(rolling_aggregation(values, start[selarr], end[selarr], minp)) - expected = Series(rolling_aggregation(values, start, end, minp)[selarr]) - tm.assert_equal(expected, result) - - -def test_rolling_aggregation_with_unused_elements(rolling_aggregation): - # GH-45647 - minp, width = 0, 5 # width at least 4 for kurt - size = 2 * width + 5 - values = np.arange(1, size + 1, dtype=np.float64) - values[width : width + 2] = sys.float_info.min - values[width + 2] = np.nan - values[width + 3 : width + 5] = sys.float_info.max - start = np.array([0, size - width], dtype=np.int64) - end = np.array([width, size], dtype=np.int64) - loc = np.array( - [j for i in range(len(start)) for j in range(start[i], end[i])], - dtype=np.int32, - ) - result = Series(rolling_aggregation(values, start, end, minp)) - compact_values = np.array(values[loc], dtype=np.float64) - compact_start = np.arange(0, len(start) * width, width, dtype=np.int64) - compact_end = compact_start + width - expected = Series( - rolling_aggregation(compact_values, compact_start, compact_end, minp) - ) - assert np.isfinite(expected.values).all(), "Not all expected values are finite" - tm.assert_equal(expected, result) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/sniffio/_version.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/sniffio/_version.py deleted file mode 100644 index 5a5f906bbf9194d624facc763022721a96a4a3b4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/sniffio/_version.py +++ /dev/null @@ -1,3 +0,0 
@@ -# This file is imported from __init__.py and exec'd from setup.py - -__version__ = "1.3.0" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/toolz/curried/operator.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/toolz/curried/operator.py deleted file mode 100644 index 35979a6851d55ef01a279fb56598c6e58a975376..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/toolz/curried/operator.py +++ /dev/null @@ -1,22 +0,0 @@ -from __future__ import absolute_import - -import operator - -from toolz.functoolz import curry - - -# Tests will catch if/when this needs updated -IGNORE = { - "__abs__", "__index__", "__inv__", "__invert__", "__neg__", "__not__", - "__pos__", "_abs", "abs", "attrgetter", "index", "inv", "invert", - "itemgetter", "neg", "not_", "pos", "truth" -} -locals().update( - {name: f if name in IGNORE else curry(f) - for name, f in vars(operator).items() if callable(f)} -) - -# Clean up the namespace. -del IGNORE -del curry -del operator diff --git a/spaces/quidiaMuxgu/Expedit-SAM/HWK Support Suite Setup V02.02.003 FREE DOWNLOAD _HOT_.md b/spaces/quidiaMuxgu/Expedit-SAM/HWK Support Suite Setup V02.02.003 FREE DOWNLOAD _HOT_.md deleted file mode 100644 index f8dae1cfbd5b1a395aaa303a0bd7664989393d1c..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/HWK Support Suite Setup V02.02.003 FREE DOWNLOAD _HOT_.md +++ /dev/null @@ -1,6 +0,0 @@ -

    HWK Support Suite Setup v02.02.003 FREE DOWNLOAD


    Download Ziphttps://geags.com/2uCt2r



    -
    -
    -
    -

    diff --git a/spaces/rachana219/MODT2/trackers/strongsort/sort/__init__.py b/spaces/rachana219/MODT2/trackers/strongsort/sort/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Acrobat Dc 2015 Crack HOT.md b/spaces/raedeXanto/academic-chatgpt-beta/Adobe Acrobat Dc 2015 Crack HOT.md deleted file mode 100644 index 518a53675168b4e2d92489547a9e215f24fe2907..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Acrobat Dc 2015 Crack HOT.md +++ /dev/null @@ -1,17 +0,0 @@ - -

    How to Crack Adobe Acrobat DC 2015 and Why You Shouldn't Do It

    -

Adobe Acrobat DC 2015 is one of the most popular and powerful PDF editors on the market. It allows you to create, edit, convert, sign, share, and protect PDF files with ease. However, it also comes with a hefty price tag that may not be affordable for everyone. That's why some people resort to using a crack to get the full version of Adobe Acrobat DC 2015 without paying anything.

    -

    Adobe Acrobat Dc 2015 Crack


    DOWNLOAD ••• https://tinourl.com/2uL41B



    -

    But is cracking Adobe Acrobat DC 2015 worth it? What are the risks and consequences of using a cracked version? How can you get the official version of Adobe Acrobat DC 2015 legally and ethically? In this article, we will answer these questions and show you how to crack Adobe Acrobat DC 2015 and why you shouldn't do it.

    -

    What is Adobe Acrobat DC 2015?

    -

    Adobe Acrobat DC 2015 is the latest version of the Acrobat PDF editor that was released by Adobe Systems in April 2015. It is part of the Document Cloud service that allows users to access and share their PDF files across different devices and platforms. It has two editions: Acrobat Pro DC and Acrobat Standard DC.

    -

    Features and benefits of Adobe Acrobat DC 2015

    -

    Adobe Acrobat DC 2015 has many features and benefits that make it a powerful and versatile PDF tool. Some of them are:

    -
      -
    • Edit PDFs: You can edit text, images, links, headers, footers, watermarks, backgrounds, and more in your PDF files. You can also rearrange, delete, rotate, crop, split, merge, or extract pages from your PDF files.
    • -
    • Create PDFs: You can create PDF files from various sources, such as Microsoft Office files, images, web pages, scans, clipboard content, etc. You can also convert PDF files to other formats, such as Word, Excel, PowerPoint, HTML, etc.
    • -
    • Sign PDFs: You can sign your PDF files electronically with your digital signature or certificate. You can also fill out forms, collect signatures from others, track responses, and send reminders.
    • -
    • -

      -
      -
      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Gaia synthesizer sound designer crack Experience the joy of GAIA synthesis on your Mac or PC.md b/spaces/raedeXanto/academic-chatgpt-beta/Gaia synthesizer sound designer crack Experience the joy of GAIA synthesis on your Mac or PC.md deleted file mode 100644 index 9f207fa7c873201f2c0f71e148b0f0074df29350..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Gaia synthesizer sound designer crack Experience the joy of GAIA synthesis on your Mac or PC.md +++ /dev/null @@ -1,115 +0,0 @@ - -

      Gaia Synthesizer Sound Designer Crack: What You Need to Know

      -

If you own a Roland Gaia SH-01 synthesizer, you might have heard of Gaia Synthesizer Sound Designer, a software editor that lets you access and edit all the parameters of the synth on your computer. You might also have seen websites or forums offering a cracked version of this software for free or for a very low price. But before you download and install such a crack, you should know what you are getting into and what the potential consequences are. In this article, we will explain what Gaia Synthesizer Sound Designer is, why some people want to crack it, what the risks and drawbacks of using a cracked version are, and how you can get Gaia Synthesizer Sound Designer legally and safely.

      -

      Introduction

      -

      What is Gaia Synthesizer Sound Designer?

      -

      Gaia Synthesizer Sound Designer is a software-based editor application for the Roland Gaia SH-01 synthesizer. Compatible with both Windows and Mac OS X, it allows users to access the Gaia synth's entire set of sound parameters visually via a computer. The software has a friendly user interface that makes creating Gaia sounds fast and easy. It also has a unique waveform display that shows the shape of the currently edited waveform in a virtual oscilloscope, providing a graphical representation of the sound. Gaia Synthesizer Sound Designer is the perfect companion for all Gaia synth owners—beginners, music educators, and pros alike.

      -

      gaia synthesizer sound designer crack


      Download Zip ✓✓✓ https://tinourl.com/2uL2Za



      -

      Why do some people want to crack it?

      -

Gaia Synthesizer Sound Designer is not free software. It is sold by Roland as an optional accessory for the Gaia SH-01 synthesizer. The original price of the software was $99 USD, but it has been discontinued by Roland and is now hard to find in online stores. Some online vendors are selling it for upwards of $200 USD, which is both insane and laughable. Some people who own a Gaia SH-01 synthesizer but do not want to pay such a high price for the software might be tempted to look for a cracked version online. A cracked version is a modified copy of the software that bypasses the copy protection or activation mechanism, allowing users to run it without paying for it or registering it.

      -

      What are the risks and drawbacks of using a cracked version?

      -

      Using a cracked version of Gaia Synthesizer Sound Designer might seem like a good idea at first, but it comes with many risks and drawbacks that outweigh any possible benefits. Here are some of them:

      -
        -
      • It is illegal. Cracking software is a form of software piracy, which is a violation of intellectual property rights and copyright laws. By downloading and using a cracked version of Gaia Synthesizer Sound Designer, you are breaking the law and exposing yourself to potential legal actions from Roland or other authorities.
      • -
      • It is unethical. Cracking software is also a form of stealing, which is morally wrong and unfair to the developers who spent time and money creating the software. By using a cracked version of Gaia Synthesizer Sound Designer, you are depriving Roland of their rightful income and showing disrespect to their work.
      • -
      • It is unsafe. Cracked software often comes from dubious sources that might contain malware, viruses, spyware, or other harmful programs that can damage your computer or compromise your personal information. By downloading and installing a cracked version of Gaia Synthesizer Sound Designer, you are putting your computer and your data at risk.
      • -
      • It is unreliable. Cracked software often has bugs, errors, glitches, or compatibility issues that can affect its performance or functionality. By using a cracked version of Gaia Synthesizer Sound Designer, you might experience crashes, freezes, corrupted files, lost data, or other problems that can ruin your work or your enjoyment.
      • -
      • It is unsupported. Cracked software does not receive any updates, patches, fixes, or improvements from the developers. By using a cracked version of Gaia Synthesizer Sound Designer, you are missing out on any new features or enhancements that Roland might release for the software.
      • -
      -

      How to get Gaia Synthesizer Sound Designer legally and safely

      -

      If you want to use Gaia Synthesizer Sound Designer with your Gaia SH-01 synthesizer, there are better ways than resorting to cracking it. Here are some options that you can consider:

      -

      Buy it from Roland or authorized dealers

      -

      The best way to get Gaia Synthesizer Sound Designer legally and safely is to buy it from Roland or authorized dealers. This way, you will get an original copy of the software that is fully functional, secure, reliable, and supported by Roland. You will also get a license key that will allow you to activate and register the software on your computer. You will also be supporting Roland's business and encouraging them to create more products for their customers.

      -

      The downside of this option is that it might be difficult or expensive to find a copy of Gaia Synthesizer Sound Designer in online stores nowadays since it has been discontinued by Roland. You might have to search hard or wait long for one to become available. You might also have to pay more than the original price if there is high demand or low supply for the software.

      -

      Use alternative free or low-cost editors

      -

Another way to get the functionality of Gaia Synthesizer Sound Designer legally and safely is to use alternative free or low-cost editors that are compatible with the Gaia SH-01 synthesizer. Several third-party editors, created by independent developers and enthusiasts, offer similar or even better features than Gaia Synthesizer Sound Designer. Some examples are:

      -
        -
      • Gaia Tool: A free web-based editor that allows users to edit all parameters of the Gaia SH-01 synth via MIDI connection.
      • -
      • Roland GAIA SH-01 Sound Editor: A free Windows-based editor that allows users to edit all parameters of the Gaia SH-01 synth via USB connection.
      • -
      • Patch Base: A low-cost iOS-based editor that allows users to edit all parameters of the Gaia SH-01 synth via MIDI connection.
      • -
      -
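For the curious, all three of these editors talk to the synth the same way: by sending Roland-style SysEx messages over a MIDI or USB-MIDI connection. The sketch below shows, in broad strokes, how such a message is assembled. It is a hypothetical illustration: the parameter address, value, and model-ID bytes are placeholders, not values taken from Roland's published MIDI implementation chart for the GAIA SH-01.

```python
# Hypothetical sketch of a Roland "Data Set 1" (DT1) SysEx frame, the kind
# of message an editor sends to change one synth parameter over MIDI.
# The address, value, and model-ID bytes below are placeholders, not taken
# from Roland's MIDI implementation chart for the GAIA SH-01.

def roland_checksum(payload):
    """Roland SysEx checksum: low 7 bits of (128 - sum(payload) mod 128)."""
    return (128 - sum(payload) % 128) % 128

def dt1_message(address, data, manufacturer=0x41, device=0x10,
                model=(0x00, 0x00, 0x41)):
    """Build a DT1 (command 0x12) frame: header, address, data, checksum."""
    body = list(address) + list(data)
    return bytes([0xF0, manufacturer, device, *model, 0x12,
                  *body, roland_checksum(body), 0xF7])

# Set a hypothetical parameter at address 10 00 00 0D to value 0x40.
msg = dt1_message(address=(0x10, 0x00, 0x00, 0x0D), data=(0x40,))
print(msg.hex(" "))  # → f0 41 10 00 00 41 12 10 00 00 0d 40 23 f7
```

The checksum scheme (the low seven bits of 128 minus the byte sum) is the one Roland documents for its DT1 data-transfer messages; everything specific to the Gaia here is assumed, so a real editor would look the actual bytes up in the synth's MIDI implementation chart.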

The upside of this option is that these editors are easy to find and download online, and they are cheaper than Gaia Synthesizer Sound Designer, or even free, while offering comparable or better features.

      -

      -

The downside is that these editors are not official Roland products and might not be as stable, compatible, or well supported as Gaia Synthesizer Sound Designer. They may also use different user interfaces or workflows.

      -

      Learn to program the Gaia SH-01 synth without an editor

      -

The final option is to forgo an editor altogether and learn how to program the Gaia SH-01 synth directly on the hardware. The synth has an intuitive, hands-on interface that lets you tweak every parameter from the front panel using knobs, sliders, buttons, and switches, while its LCD screen displays the current settings and values. It also provides 64 preset patches and 64 user patches that can be stored and recalled easily using buttons on the panel.

      -

      The upside of this option is that it does not require any additional software or hardware apart from the synth itself. It also allows users to develop their skills and creativity in sound design by experimenting with different parameters and combinations on the fly. It also gives users more control and flexibility over their sounds than relying on an editor.

      -

      Conclusion

      -

      Summary of the main points

      -

In conclusion, Gaia Synthesizer Sound Designer is a software editor for the Roland Gaia SH-01 synthesizer that allows users to access and edit all the parameters of the synth on their computer. However, it is not free software, and it has been discontinued by Roland, making it hard or expensive to find in online stores. Some people might be tempted to use a cracked version of the software, but this is illegal, unethical, unsafe, unreliable, and unsupported. There are better ways to get Gaia Synthesizer Sound Designer legally and safely, such as buying it from Roland or authorized dealers, using alternative free or low-cost editors, or learning to program the Gaia SH-01 synth without an editor.

      -

      Call to action and recommendations

      -

      If you are a Gaia SH-01 synth owner and you want to use Gaia Synthesizer Sound Designer, we recommend that you avoid cracking it and choose one of the options that we have suggested. This way, you will be able to enjoy your synth and your software without breaking the law, harming your computer, or missing out on any updates or improvements. You will also be able to create amazing sounds with your synth using your own skills and creativity.

      -

      If you want to learn more about Gaia Synthesizer Sound Designer or the Gaia SH-01 synth, you can check out some of these resources:

      • Roland GAIA SH-01 Product Page: the official website of the Gaia SH-01 synthesizer, with product information, specifications, features, videos, downloads, and support.
      • Roland GAIA SH-01 Owner's Manual: the official user manual of the synthesizer, with detailed instructions on how to use the synth and its functions.
      • Roland GAIA SH-01 Sound Designer Manual: the official user manual of the Gaia Synthesizer Sound Designer software, with detailed instructions on how to use the software and its functions.
      • Best Roland Gaia Tutorials SH-01 Synthesizer: a YouTube playlist collecting some of the best tutorials and demos for the Gaia SH-01 synthesizer and the Gaia Synthesizer Sound Designer software.
      • GAIA SH-01: Synth Basics Fundamentals: a video series giving an easy step-by-step introduction to synth programming on the Gaia SH-01.

      We hope that this article has been helpful and informative for you. Thank you for reading and happy synthesizing!


      FAQs

      1. What is Gaia Synthesizer Sound Designer?

        Gaia Synthesizer Sound Designer is a software editor for the Roland Gaia SH-01 synthesizer that lets users access and edit all of the synth's parameters on their computer.

      2. Why do some people want to crack it?

        Because it is not free software and has been discontinued by Roland, it is hard or expensive to find in online stores, which tempts some people to look for a cracked copy.

      3. What are the risks and drawbacks of using a cracked version?

        A cracked version is illegal, unethical, unsafe, unreliable, and unsupported.

      4. How can I get Gaia Synthesizer Sound Designer legally and safely?

        Buy it from Roland or an authorized dealer, use an alternative free or low-cost editor, or learn to program the Gaia SH-01 synth without an editor.

      5. Where can I learn more about Gaia Synthesizer Sound Designer or the Gaia SH-01 synth?

        Visit the official Roland websites, read the user manuals, watch tutorials and demos on YouTube, or follow Roland's social media accounts.

      \ No newline at end of file diff --git a/spaces/rajaatif786/VirBert2/Toxonomy/modules/classifier.py b/spaces/rajaatif786/VirBert2/Toxonomy/modules/classifier.py deleted file mode 100644 index 151769cd8c751cc1e8a777fe19988aa622325c73..0000000000000000000000000000000000000000 --- a/spaces/rajaatif786/VirBert2/Toxonomy/modules/classifier.py +++ /dev/null @@ -1,526 +0,0 @@ -# Create the BertClassfier class -import numpy as np -import torch -import torch.nn as nn -from transformers import AdamW, get_linear_schedule_with_warmup -device = 0 - -import random -import time -import torch.nn as nn - -# Specify loss function -loss_fn = nn.CrossEntropyLoss() - -class PretrainedBert(nn.Module): - """Bert Model for Classification Tasks. - """ - def __init__(self, freeze_bert=False): - """ - @param bert: a BertModel object - @param classifier: a torch.nn.Module classifier - @param freeze_bert (bool): Set `False` to fine-tune the BERT model - """ - super(PretrainedBert, self).__init__() - # Specify hidden size of BERT, hidden size of our classifier, and number of labels - D_in, H, D_out = 768, 50, 14 - # Instantiate BERT model - from transformers import BertConfig - - config = BertConfig( - # we align this to the tokenizer vocab_size - max_position_embeddings=5000, - hidden_size=768, - num_attention_heads=2, - num_hidden_layers=2, - type_vocab_size=1 -) - from transformers import BertForMaskedLM - - self.bert =BertModel(config) - # Instantiate an one-layer feed-forward classifier - self.classifier = nn.Sequential( - nn.Linear(D_in, H), - nn.ReLU(), - #nn.Dropout(0.5), - nn.Linear(H, D_out) - ) - - # Freeze the BERT model - if freeze_bert: - for param in self.bert.parameters(): - param.requires_grad = False - - def forward(self, input_ids, attention_mask): - """ - Feed input to BERT and the classifier to compute logits. 
- @param input_ids (torch.Tensor): an input tensor with shape (batch_size, - max_length) - @param attention_mask (torch.Tensor): a tensor that hold attention mask - information with shape (batch_size, max_length) - @return logits (torch.Tensor): an output tensor with shape (batch_size, - num_labels) - """ - # Feed input to BERT - outputs = self.bert(input_ids=input_ids, - attention_mask=attention_mask) - - # Extract the last hidden state of the token `[CLS]` for classification task - last_hidden_state_cls = outputs[0][:, 0, :] - - # Feed input to classifier to compute logits - logits = self.classifier(last_hidden_state_cls) - - return logits -from transformers import AdamW, get_linear_schedule_with_warmup -#device='cuda' - - -def valid_evaluate(model, val_dataloader): - """After the completion of each training epoch, measure the model's performance - on our validation set. - """ - # Put the model into the evaluation mode. The dropout layers are disabled during - # the test time. - model.eval() - - # Tracking variables - val_accuracy = [] - val_loss = [] - - # For each batch in our validation set... - for batch in val_dataloader: - # Load batch to GPU - b_input_ids, b_attn_mask, b_labels = tuple(t.to(device) for t in batch) - - # Compute logits - with torch.no_grad(): - logits = model(b_input_ids, b_attn_mask) - - # Compute loss - loss = loss_fn(logits, b_labels) - val_loss.append(loss.item()) - - # Get the predictions - preds = torch.argmax(logits, dim=1).flatten() - - # Calculate the accuracy rate - accuracy = (preds == b_labels).cpu().numpy().mean() * 100 - val_accuracy.append(accuracy) - - # Compute the average accuracy and loss over the validation set. - val_loss = np.mean(val_loss) - val_accuracy = np.mean(val_accuracy) - - return val_loss, val_accuracy - - - -import torch -import torch.nn as nn -from transformers import BertModel - -# Create the BertClassfier class -class FinetunningBert(nn.Module): - """Bert Model for Classification Tasks. 
- """ - def __init__(self,virus_dir, freeze_bert=False): - """ - @param bert: a BertModel object - @param classifier: a torch.nn.Module classifier - @param freeze_bert (bool): Set `False` to fine-tune the BERT model - """ - super(FinetunningBert, self).__init__() - # Specify hidden size of BERT, hidden size of our classifier, and number of labels - D_in, H, D_out = 768, 50, 2 - # Instantiate BERT model - from transformers import BertConfig - - from transformers import BertForMaskedLM - bert_classifier = PretrainedBert(freeze_bert=False) - bert_classifier.load_state_dict(torch.load(virus_dir+'/virBERT.pt')) - self.bert =bert_classifier.bert.to(device) - # Instantiate an one-layer feed-forward classifier - self.classifier = nn.Sequential( - nn.Linear(D_in, H), - nn.ReLU(), - #nn.Dropout(0.5), - nn.Linear(H, D_out) - ) - - # Freeze the BERT model - if freeze_bert: - for param in self.bert.parameters(): - param.requires_grad = False - - def forward(self, input_ids, attention_mask): - """ - Feed input to BERT and the classifier to compute logits. - @param input_ids (torch.Tensor): an input tensor with shape (batch_size, - max_length) - @param attention_mask (torch.Tensor): a tensor that hold attention mask - information with shape (batch_size, max_length) - @return logits (torch.Tensor): an output tensor with shape (batch_size, - num_labels) - """ - # Feed input to BERT - outputs = self.bert(input_ids=input_ids, - attention_mask=attention_mask) - - # Extract the last hidden state of the token `[CLS]` for classification task - last_hidden_state_cls = outputs[0][:, 0, :] - - # Feed input to classifier to compute logits - logits = self.classifier(last_hidden_state_cls) - - return logits -from transformers import AdamW, get_linear_schedule_with_warmup -#device='cuda' -def initialize_finetunningBert(train_dataloader,virus_dir,epochs=4): - """Initialize the Bert Classifier, the optimizer and the learning rate scheduler. 
- """ - # Instantiate Bert Classifier - bert_classifier = FinetunningBert(virus_dir,freeze_bert=False) - - # Tell PyTorch to run the model on GPU - bert_classifier.to(device) - - # Create the optimizer - optimizer = AdamW(bert_classifier.parameters(), - lr=5e-5, # Default learning rate - eps=1e-8 # Default epsilon value - ) - - # Total number of training steps - total_steps = len(train_dataloader) * epochs - - # Set up the learning rate scheduler - scheduler = get_linear_schedule_with_warmup(optimizer, - num_warmup_steps=0, # Default value - num_training_steps=total_steps) - return bert_classifier, optimizer, scheduler -import random -import time -import torch.nn as nn - -# Specify loss function -loss_fn = nn.CrossEntropyLoss() - - -def finetunningBert_training(model, optimizer, scheduler, train_dataloader, val_dataloader=None, epochs=4, evaluation=False): - """Train the BertClassifier model. - """ - # Start training loop - print("Start training...\n") - for epoch_i in range(epochs): - # ======================================= - # Training - # ======================================= - # Print the header of the result table - print(f"{'Epoch':^7} | {'Batch':^7} | {'Train Loss':^12} | {'Val Loss':^10} | {'Val Acc':^9} | {'Elapsed':^9}") - print("-"*70) - - # Measure the elapsed time of each epoch - t0_epoch, t0_batch = time.time(), time.time() - - # Reset tracking variables at the beginning of each epoch - total_loss, batch_loss, batch_counts = 0, 0, 0 - - # Put the model into the training mode - model.train() - - # For each batch of training data... - for step, batch in enumerate(train_dataloader): - batch_counts +=1 - # Load batch to GPU - b_input_ids, b_attn_mask, b_labels = tuple(t.to(device) for t in batch) - - # Zero out any previously calculated gradients - model.zero_grad() - - # Perform a forward pass. This will return logits. 
- logits = model(b_input_ids, b_attn_mask) - - # Compute loss and accumulate the loss values - loss = loss_fn(logits, b_labels) - batch_loss += loss.item() - total_loss += loss.item() - - # Perform a backward pass to calculate gradients - loss.backward() - - # Clip the norm of the gradients to 1.0 to prevent "exploding gradients" - torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) - - # Update parameters and the learning rate - optimizer.step() - scheduler.step() - - # Print the loss values and time elapsed for every 20 batches - if (step % 20 == 0 and step != 0) or (step == len(train_dataloader) - 1): - # Calculate time elapsed for 20 batches - time_elapsed = time.time() - t0_batch - - # Print training results - print(f"{epoch_i + 1:^7} | {step:^7} | {batch_loss / batch_counts:^12.6f} | {'-':^10} | {'-':^9} | {time_elapsed:^9.2f}") - - # Reset batch tracking variables - batch_loss, batch_counts = 0, 0 - t0_batch = time.time() - - # Calculate the average loss over the entire training data - avg_train_loss = total_loss / len(train_dataloader) - torch.save(model.state_dict(), '{}model.pt'.format("VirDNA")) - print("-"*70) - # ======================================= - # Evaluation - # ======================================= - if evaluation == True: - # After the completion of each training epoch, measure the model's performance - # on our validation set. - val_loss, val_accuracy = valid_evaluate(model, val_dataloader) - - # Print performance over the entire training data - time_elapsed = time.time() - t0_epoch - - print(f"{epoch_i + 1:^7} | {'-':^7} | {avg_train_loss:^12.6f} | {val_loss:^10.6f} | {val_accuracy:^9.2f} | {time_elapsed:^9.2f}") - print("-"*70) - print("\n") - - print("Training complete!") - -def bertPredictions(torch,model, val_dataloader): - """After the completion of each training epoch, measure the model's performance - on our validation set. - """ - # Put the model into the evaluation mode. 
The dropout layers are disabled during - # the test time. - model.eval() - device = 0 - print("working3") - - # Tracking variables - val_accuracy = [] - val_loss = [] - pred=[] - actual=[] - # For each batch in our validation set... - for batch in val_dataloader: - device = 0 - # Load batch to GPU - b_input_ids, b_attn_mask, b_labels = tuple(t for t in batch) - - # Compute logits - with torch.no_grad(): - logits = model(b_input_ids, b_attn_mask) - - # Compute loss - #loss = loss_fn(logits, b_labels) - #val_loss.append(loss.item()) - - # Get the predictions - preds = torch.argmax(logits, dim=1).flatten() - - # Calculate the accuracy rate - #accuracy = (preds == b_labels).cpu().numpy().mean() * 100 - #val_accuracy.append(accuracy) - pred.append(preds.cpu()) - #actual.append(b_labels.cpu()) - - # Compute the average accuracy and loss over the validation set. - #val_loss = np.mean(val_loss) - #val_accuracy = np.mean(val_accuracy) - - return pred - - - - - - - - - - - - -import torch -import torch.nn as nn -from transformers import BertModel - -# Create the BertClassfier class -class ScratchBert(nn.Module): - """Bert Model for Classification Tasks. 
- """ - def __init__(self, freeze_bert=False): - """ - @param bert: a BertModel object - @param classifier: a torch.nn.Module classifier - @param freeze_bert (bool): Set `False` to fine-tune the BERT model - """ - super(ScratchBert, self).__init__() - # Specify hidden size of BERT, hidden size of our classifier, and number of labels - D_in, H, D_out = 768, 50, 2 - # Instantiate BERT model - from transformers import BertConfig - - - config = BertConfig( - # we align this to the tokenizer vocab_size - max_position_embeddings=5000, - hidden_size=768, - num_attention_heads=2, - num_hidden_layers=2, - type_vocab_size=1 -) - from transformers import BertForMaskedLM - - self.bert =BertModel(config) - # Instantiate an one-layer feed-forward classifier - self.classifier = nn.Sequential( - nn.Linear(D_in, H), - nn.ReLU(), - #nn.Dropout(0.5), - nn.Linear(H, D_out) - ) - - # Freeze the BERT model - if freeze_bert: - for param in self.bert.parameters(): - param.requires_grad = False - - def forward(self, input_ids, attention_mask): - """ - Feed input to BERT and the classifier to compute logits. - @param input_ids (torch.Tensor): an input tensor with shape (batch_size, - max_length) - @param attention_mask (torch.Tensor): a tensor that hold attention mask - information with shape (batch_size, max_length) - @return logits (torch.Tensor): an output tensor with shape (batch_size, - num_labels) - """ - # Feed input to BERT - outputs = self.bert(input_ids=input_ids, - attention_mask=attention_mask) - - # Extract the last hidden state of the token `[CLS]` for classification task - last_hidden_state_cls = outputs[0][:, 0, :] - - # Feed input to classifier to compute logits - logits = self.classifier(last_hidden_state_cls) - - return logits -from transformers import AdamW, get_linear_schedule_with_warmup -#device='cuda' -def initialize_model(train_dataloader,epochs=4): - """Initialize the Bert Classifier, the optimizer and the learning rate scheduler. 
- """ - # Instantiate Bert Classifier - bert_classifier = ScratchBert(freeze_bert=False) - - # Tell PyTorch to run the model on GPU - bert_classifier.to(device) - - # Create the optimizer - optimizer = AdamW(bert_classifier.parameters(), - lr=5e-5, # Default learning rate - eps=1e-8 # Default epsilon value - ) - - # Total number of training steps - total_steps = len(train_dataloader) * epochs - - # Set up the learning rate scheduler - scheduler = get_linear_schedule_with_warmup(optimizer, - num_warmup_steps=0, # Default value - num_training_steps=total_steps) - return bert_classifier, optimizer, scheduler -import random -import time -import torch.nn as nn - -# Specify loss function -loss_fn = nn.CrossEntropyLoss() - - -def train(model,optimizer, scheduler, train_dataloader, val_dataloader=None, epochs=4, evaluation=False): - """Train the BertClassifier model. - """ - # Start training loop - print("Start training...\n") - for epoch_i in range(epochs): - # ======================================= - # Training - # ======================================= - # Print the header of the result table - print(f"{'Epoch':^7} | {'Batch':^7} | {'Train Loss':^12} | {'Val Loss':^10} | {'Val Acc':^9} | {'Elapsed':^9}") - print("-"*70) - - # Measure the elapsed time of each epoch - t0_epoch, t0_batch = time.time(), time.time() - - # Reset tracking variables at the beginning of each epoch - total_loss, batch_loss, batch_counts = 0, 0, 0 - - # Put the model into the training mode - model.train() - - # For each batch of training data... - for step, batch in enumerate(train_dataloader): - batch_counts +=1 - # Load batch to GPU - b_input_ids, b_attn_mask, b_labels = tuple(t.to(device) for t in batch) - - # Zero out any previously calculated gradients - model.zero_grad() - - # Perform a forward pass. This will return logits. 
- logits = model(b_input_ids, b_attn_mask) - - # Compute loss and accumulate the loss values - loss = loss_fn(logits, b_labels) - batch_loss += loss.item() - total_loss += loss.item() - - # Perform a backward pass to calculate gradients - loss.backward() - - # Clip the norm of the gradients to 1.0 to prevent "exploding gradients" - torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) - - # Update parameters and the learning rate - optimizer.step() - scheduler.step() - - # Print the loss values and time elapsed for every 20 batches - if (step % 20 == 0 and step != 0) or (step == len(train_dataloader) - 1): - # Calculate time elapsed for 20 batches - time_elapsed = time.time() - t0_batch - - # Print training results - print(f"{epoch_i + 1:^7} | {step:^7} | {batch_loss / batch_counts:^12.6f} | {'-':^10} | {'-':^9} | {time_elapsed:^9.2f}") - - # Reset batch tracking variables - batch_loss, batch_counts = 0, 0 - t0_batch = time.time() - - # Calculate the average loss over the entire training data - avg_train_loss = total_loss / len(train_dataloader) - torch.save(model.state_dict(), '{}model.pt'.format("VirDNA")) - print("-"*70) - # ======================================= - # Evaluation - # ======================================= - if evaluation == True: - # After the completion of each training epoch, measure the model's performance - # on our validation set. 
- val_loss, val_accuracy = valid_evaluate(model, val_dataloader) - - # Print performance over the entire training data - time_elapsed = time.time() - t0_epoch - - print(f"{epoch_i + 1:^7} | {'-':^7} | {avg_train_loss:^12.6f} | {val_loss:^10.6f} | {val_accuracy:^9.2f} | {time_elapsed:^9.2f}") - print("-"*70) - print("\n") - - print("Training complete!") \ No newline at end of file diff --git a/spaces/rayan-saleh/whisper2notion/README.md b/spaces/rayan-saleh/whisper2notion/README.md deleted file mode 100644 index 9486159d1c4d9d5dab4f480d421486780e10cba3..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/README.md +++ /dev/null @@ -1,149 +0,0 @@ ---- -title: Whisper Webui -emoji: ⚡ -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -# Running Locally - -To run this program locally, first install Python 3.9+ and Git. Then install Pytorch 10.1+ and all the other dependencies: -``` -pip install -r requirements.txt -``` - -You can find detailed instructions for how to install this on Windows 10/11 [here (PDF)](docs/windows/install_win10_win11.pdf). 
- -Finally, run the full version (no audio length restrictions) of the app with parallel CPU/GPU enabled: -``` -python app.py --input_audio_max_duration -1 --server_name 127.0.0.1 --auto_parallel True -``` - -You can also run the CLI interface, which is similar to Whisper's own CLI but also supports the following additional arguments: -``` -python cli.py \ -[--vad {none,silero-vad,silero-vad-skip-gaps,silero-vad-expand-into-gaps,periodic-vad}] \ -[--vad_merge_window VAD_MERGE_WINDOW] \ -[--vad_max_merge_size VAD_MAX_MERGE_SIZE] \ -[--vad_padding VAD_PADDING] \ -[--vad_prompt_window VAD_PROMPT_WINDOW] -[--vad_cpu_cores NUMBER_OF_CORES] -[--vad_parallel_devices COMMA_DELIMITED_DEVICES] -[--auto_parallel BOOLEAN] -``` -In addition, you may also use URL's in addition to file paths as input. -``` -python cli.py --model large --vad silero-vad --language Japanese "https://www.youtube.com/watch?v=4cICErqqRSM" -``` - -## Google Colab - -You can also run this Web UI directly on [Google Colab](https://colab.research.google.com/drive/1qeTSvi7Bt_5RMm88ipW4fkcsMOKlDDss?usp=sharing), if you haven't got a GPU powerful enough to run the larger models. - -See the [colab documentation](docs/colab.md) for more information. - -## Parallel Execution - -You can also run both the Web-UI or the CLI on multiple GPUs in parallel, using the `vad_parallel_devices` option. This takes a comma-delimited list of -device IDs (0, 1, etc.) that Whisper should be distributed to and run on concurrently: -``` -python cli.py --model large --vad silero-vad --language Japanese \ ---vad_parallel_devices 0,1 "https://www.youtube.com/watch?v=4cICErqqRSM" -``` - -Note that this requires a VAD to function properly, otherwise only the first GPU will be used. Though you could use `period-vad` to avoid taking the hit -of running Silero-Vad, at a slight cost to accuracy. - -This is achieved by creating N child processes (where N is the number of selected devices), where Whisper is run concurrently. 
In `app.py`, you can also -set the `vad_process_timeout` option. This configures the number of seconds until a process is killed due to inactivity, freeing RAM and video memory. -The default value is 30 minutes. - -``` -python app.py --input_audio_max_duration -1 --vad_parallel_devices 0,1 --vad_process_timeout 3600 -``` - -To execute the Silero VAD itself in parallel, use the `vad_cpu_cores` option: -``` -python app.py --input_audio_max_duration -1 --vad_parallel_devices 0,1 --vad_process_timeout 3600 --vad_cpu_cores 4 -``` - -You may also use `vad_process_timeout` with a single device (`--vad_parallel_devices 0`), if you prefer to always free video memory after a period of time. - -### Auto Parallel - -You can also set `auto_parallel` to `True`. This will set `vad_parallel_devices` to use all the GPU devices on the system, and `vad_cpu_cores` to be equal to the number of -cores (up to 8): -``` -python app.py --input_audio_max_duration -1 --auto_parallel True -``` - -### Multiple Files - -You can upload multiple files either through the "Upload files" option, or as a playlist on YouTube. -Each audio file will then be processed in turn, and the resulting SRT/VTT/Transcript will be made available in the "Download" section. -When more than one file is processed, the UI will also generate a "All_Output" zip file containing all the text output files. - -# Docker - -To run it in Docker, first install Docker and optionally the NVIDIA Container Toolkit in order to use the GPU. -Then either use the GitLab hosted container below, or check out this repository and build an image: -``` -sudo docker build -t whisper-webui:1 . 
-``` - -You can then start the WebUI with GPU support like so: -``` -sudo docker run -d --gpus=all -p 7860:7860 whisper-webui:1 -``` - -Leave out "--gpus=all" if you don't have access to a GPU with enough memory, and are fine with running it on the CPU only: -``` -sudo docker run -d -p 7860:7860 whisper-webui:1 -``` - -# GitLab Docker Registry - -This Docker container is also hosted on GitLab: - -``` -sudo docker run -d --gpus=all -p 7860:7860 registry.gitlab.com/aadnk/whisper-webui:latest -``` - -## Custom Arguments - -You can also pass custom arguments to `app.py` in the Docker container, for instance to be able to use all the GPUs in parallel: -``` -sudo docker run -d --gpus all -p 7860:7860 \ ---mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \ ---restart=on-failure:15 registry.gitlab.com/aadnk/whisper-webui:latest \ -app.py --input_audio_max_duration -1 --server_name 0.0.0.0 --auto_parallel True \ ---default_vad silero-vad --default_model_name large -``` - -You can also call `cli.py` the same way: -``` -sudo docker run --gpus all \ ---mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \ ---mount type=bind,source=${PWD},target=/app/data \ -registry.gitlab.com/aadnk/whisper-webui:latest \ -cli.py --model large --auto_parallel True --vad silero-vad \ ---output_dir /app/data /app/data/YOUR-FILE-HERE.mp4 -``` - -## Caching - -Note that the models themselves are currently not included in the Docker images, and will be downloaded on the demand. -To avoid this, bind the directory /root/.cache/whisper to some directory on the host (for instance /home/administrator/.cache/whisper), where you can (optionally) -prepopulate the directory with the different Whisper models. 
-``` -sudo docker run -d --gpus=all -p 7860:7860 \ ---mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \ -registry.gitlab.com/aadnk/whisper-webui:latest -``` \ No newline at end of file diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/zlib.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/zlib.d.ts deleted file mode 100644 index 1d7f0c0e507405e9584cd7158cbbea92234afa84..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/zlib.d.ts +++ /dev/null @@ -1,517 +0,0 @@ -/** - * The `zlib` module provides compression functionality implemented using Gzip, - * Deflate/Inflate, and Brotli. - * - * To access it: - * - * ```js - * const zlib = require('zlib'); - * ``` - * - * Compression and decompression are built around the Node.js `Streams API`. - * - * Compressing or decompressing a stream (such as a file) can be accomplished by - * piping the source stream through a `zlib` `Transform` stream into a destination - * stream: - * - * ```js - * const { createGzip } = require('zlib'); - * const { pipeline } = require('stream'); - * const { - * createReadStream, - * createWriteStream - * } = require('fs'); - * - * const gzip = createGzip(); - * const source = createReadStream('input.txt'); - * const destination = createWriteStream('input.txt.gz'); - * - * pipeline(source, gzip, destination, (err) => { - * if (err) { - * console.error('An error occurred:', err); - * process.exitCode = 1; - * } - * }); - * - * // Or, Promisified - * - * const { promisify } = require('util'); - * const pipe = promisify(pipeline); - * - * async function do_gzip(input, output) { - * const gzip = createGzip(); - * const source = createReadStream(input); - * const destination = createWriteStream(output); - * await pipe(source, gzip, destination); - * } - * - * do_gzip('input.txt', 'input.txt.gz') - * .catch((err) => { - * console.error('An error 
occurred:', err); - * process.exitCode = 1; - * }); - * ``` - * - * It is also possible to compress or decompress data in a single step: - * - * ```js - * const { deflate, unzip } = require('zlib'); - * - * const input = '.................................'; - * deflate(input, (err, buffer) => { - * if (err) { - * console.error('An error occurred:', err); - * process.exitCode = 1; - * } - * console.log(buffer.toString('base64')); - * }); - * - * const buffer = Buffer.from('eJzT0yMAAGTvBe8=', 'base64'); - * unzip(buffer, (err, buffer) => { - * if (err) { - * console.error('An error occurred:', err); - * process.exitCode = 1; - * } - * console.log(buffer.toString()); - * }); - * - * // Or, Promisified - * - * const { promisify } = require('util'); - * const do_unzip = promisify(unzip); - * - * do_unzip(buffer) - * .then((buf) => console.log(buf.toString())) - * .catch((err) => { - * console.error('An error occurred:', err); - * process.exitCode = 1; - * }); - * ``` - * @since v0.5.8 - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/zlib.js) - */ -declare module 'zlib' { - import * as stream from 'node:stream'; - interface ZlibOptions { - /** - * @default constants.Z_NO_FLUSH - */ - flush?: number | undefined; - /** - * @default constants.Z_FINISH - */ - finishFlush?: number | undefined; - /** - * @default 16*1024 - */ - chunkSize?: number | undefined; - windowBits?: number | undefined; - level?: number | undefined; // compression only - memLevel?: number | undefined; // compression only - strategy?: number | undefined; // compression only - dictionary?: NodeJS.ArrayBufferView | ArrayBuffer | undefined; // deflate/inflate only, empty dictionary by default - info?: boolean | undefined; - maxOutputLength?: number | undefined; - } - interface BrotliOptions { - /** - * @default constants.BROTLI_OPERATION_PROCESS - */ - flush?: number | undefined; - /** - * @default constants.BROTLI_OPERATION_FINISH - */ - finishFlush?: number | undefined; - /** - * @default 
16*1024 - */ - chunkSize?: number | undefined; - params?: - | { - /** - * Each key is a `constants.BROTLI_*` constant. - */ - [key: number]: boolean | number; - } - | undefined; - maxOutputLength?: number | undefined; - } - interface Zlib { - /** @deprecated Use bytesWritten instead. */ - readonly bytesRead: number; - readonly bytesWritten: number; - shell?: boolean | string | undefined; - close(callback?: () => void): void; - flush(kind?: number, callback?: () => void): void; - flush(callback?: () => void): void; - } - interface ZlibParams { - params(level: number, strategy: number, callback: () => void): void; - } - interface ZlibReset { - reset(): void; - } - interface BrotliCompress extends stream.Transform, Zlib {} - interface BrotliDecompress extends stream.Transform, Zlib {} - interface Gzip extends stream.Transform, Zlib {} - interface Gunzip extends stream.Transform, Zlib {} - interface Deflate extends stream.Transform, Zlib, ZlibReset, ZlibParams {} - interface Inflate extends stream.Transform, Zlib, ZlibReset {} - interface DeflateRaw extends stream.Transform, Zlib, ZlibReset, ZlibParams {} - interface InflateRaw extends stream.Transform, Zlib, ZlibReset {} - interface Unzip extends stream.Transform, Zlib {} - /** - * Creates and returns a new `BrotliCompress` object. - * @since v11.7.0, v10.16.0 - */ - function createBrotliCompress(options?: BrotliOptions): BrotliCompress; - /** - * Creates and returns a new `BrotliDecompress` object. - * @since v11.7.0, v10.16.0 - */ - function createBrotliDecompress(options?: BrotliOptions): BrotliDecompress; - /** - * Creates and returns a new `Gzip` object. - * See `example`. - * @since v0.5.8 - */ - function createGzip(options?: ZlibOptions): Gzip; - /** - * Creates and returns a new `Gunzip` object. - * @since v0.5.8 - */ - function createGunzip(options?: ZlibOptions): Gunzip; - /** - * Creates and returns a new `Deflate` object. 
- * @since v0.5.8 - */ - function createDeflate(options?: ZlibOptions): Deflate; - /** - * Creates and returns a new `Inflate` object. - * @since v0.5.8 - */ - function createInflate(options?: ZlibOptions): Inflate; - /** - * Creates and returns a new `DeflateRaw` object. - * - * An upgrade of zlib from 1.2.8 to 1.2.11 changed behavior when `windowBits`is set to 8 for raw deflate streams. zlib would automatically set `windowBits`to 9 if was initially set to 8\. Newer - * versions of zlib will throw an exception, - * so Node.js restored the original behavior of upgrading a value of 8 to 9, - * since passing `windowBits = 9` to zlib actually results in a compressed stream - * that effectively uses an 8-bit window only. - * @since v0.5.8 - */ - function createDeflateRaw(options?: ZlibOptions): DeflateRaw; - /** - * Creates and returns a new `InflateRaw` object. - * @since v0.5.8 - */ - function createInflateRaw(options?: ZlibOptions): InflateRaw; - /** - * Creates and returns a new `Unzip` object. - * @since v0.5.8 - */ - function createUnzip(options?: ZlibOptions): Unzip; - type InputType = string | ArrayBuffer | NodeJS.ArrayBufferView; - type CompressCallback = (error: Error | null, result: Buffer) => void; - /** - * @since v11.7.0, v10.16.0 - */ - function brotliCompress(buf: InputType, options: BrotliOptions, callback: CompressCallback): void; - function brotliCompress(buf: InputType, callback: CompressCallback): void; - namespace brotliCompress { - function __promisify__(buffer: InputType, options?: BrotliOptions): Promise; - } - /** - * Compress a chunk of data with `BrotliCompress`. 
- * @since v11.7.0, v10.16.0 - */ - function brotliCompressSync(buf: InputType, options?: BrotliOptions): Buffer; - /** - * @since v11.7.0, v10.16.0 - */ - function brotliDecompress(buf: InputType, options: BrotliOptions, callback: CompressCallback): void; - function brotliDecompress(buf: InputType, callback: CompressCallback): void; - namespace brotliDecompress { - function __promisify__(buffer: InputType, options?: BrotliOptions): Promise; - } - /** - * Decompress a chunk of data with `BrotliDecompress`. - * @since v11.7.0, v10.16.0 - */ - function brotliDecompressSync(buf: InputType, options?: BrotliOptions): Buffer; - /** - * @since v0.6.0 - */ - function deflate(buf: InputType, callback: CompressCallback): void; - function deflate(buf: InputType, options: ZlibOptions, callback: CompressCallback): void; - namespace deflate { - function __promisify__(buffer: InputType, options?: ZlibOptions): Promise; - } - /** - * Compress a chunk of data with `Deflate`. - * @since v0.11.12 - */ - function deflateSync(buf: InputType, options?: ZlibOptions): Buffer; - /** - * @since v0.6.0 - */ - function deflateRaw(buf: InputType, callback: CompressCallback): void; - function deflateRaw(buf: InputType, options: ZlibOptions, callback: CompressCallback): void; - namespace deflateRaw { - function __promisify__(buffer: InputType, options?: ZlibOptions): Promise; - } - /** - * Compress a chunk of data with `DeflateRaw`. - * @since v0.11.12 - */ - function deflateRawSync(buf: InputType, options?: ZlibOptions): Buffer; - /** - * @since v0.6.0 - */ - function gzip(buf: InputType, callback: CompressCallback): void; - function gzip(buf: InputType, options: ZlibOptions, callback: CompressCallback): void; - namespace gzip { - function __promisify__(buffer: InputType, options?: ZlibOptions): Promise; - } - /** - * Compress a chunk of data with `Gzip`. 
- * @since v0.11.12 - */ - function gzipSync(buf: InputType, options?: ZlibOptions): Buffer; - /** - * @since v0.6.0 - */ - function gunzip(buf: InputType, callback: CompressCallback): void; - function gunzip(buf: InputType, options: ZlibOptions, callback: CompressCallback): void; - namespace gunzip { - function __promisify__(buffer: InputType, options?: ZlibOptions): Promise; - } - /** - * Decompress a chunk of data with `Gunzip`. - * @since v0.11.12 - */ - function gunzipSync(buf: InputType, options?: ZlibOptions): Buffer; - /** - * @since v0.6.0 - */ - function inflate(buf: InputType, callback: CompressCallback): void; - function inflate(buf: InputType, options: ZlibOptions, callback: CompressCallback): void; - namespace inflate { - function __promisify__(buffer: InputType, options?: ZlibOptions): Promise; - } - /** - * Decompress a chunk of data with `Inflate`. - * @since v0.11.12 - */ - function inflateSync(buf: InputType, options?: ZlibOptions): Buffer; - /** - * @since v0.6.0 - */ - function inflateRaw(buf: InputType, callback: CompressCallback): void; - function inflateRaw(buf: InputType, options: ZlibOptions, callback: CompressCallback): void; - namespace inflateRaw { - function __promisify__(buffer: InputType, options?: ZlibOptions): Promise; - } - /** - * Decompress a chunk of data with `InflateRaw`. - * @since v0.11.12 - */ - function inflateRawSync(buf: InputType, options?: ZlibOptions): Buffer; - /** - * @since v0.6.0 - */ - function unzip(buf: InputType, callback: CompressCallback): void; - function unzip(buf: InputType, options: ZlibOptions, callback: CompressCallback): void; - namespace unzip { - function __promisify__(buffer: InputType, options?: ZlibOptions): Promise; - } - /** - * Decompress a chunk of data with `Unzip`. 
- * @since v0.11.12 - */ - function unzipSync(buf: InputType, options?: ZlibOptions): Buffer; - namespace constants { - const BROTLI_DECODE: number; - const BROTLI_DECODER_ERROR_ALLOC_BLOCK_TYPE_TREES: number; - const BROTLI_DECODER_ERROR_ALLOC_CONTEXT_MAP: number; - const BROTLI_DECODER_ERROR_ALLOC_CONTEXT_MODES: number; - const BROTLI_DECODER_ERROR_ALLOC_RING_BUFFER_1: number; - const BROTLI_DECODER_ERROR_ALLOC_RING_BUFFER_2: number; - const BROTLI_DECODER_ERROR_ALLOC_TREE_GROUPS: number; - const BROTLI_DECODER_ERROR_DICTIONARY_NOT_SET: number; - const BROTLI_DECODER_ERROR_FORMAT_BLOCK_LENGTH_1: number; - const BROTLI_DECODER_ERROR_FORMAT_BLOCK_LENGTH_2: number; - const BROTLI_DECODER_ERROR_FORMAT_CL_SPACE: number; - const BROTLI_DECODER_ERROR_FORMAT_CONTEXT_MAP_REPEAT: number; - const BROTLI_DECODER_ERROR_FORMAT_DICTIONARY: number; - const BROTLI_DECODER_ERROR_FORMAT_DISTANCE: number; - const BROTLI_DECODER_ERROR_FORMAT_EXUBERANT_META_NIBBLE: number; - const BROTLI_DECODER_ERROR_FORMAT_EXUBERANT_NIBBLE: number; - const BROTLI_DECODER_ERROR_FORMAT_HUFFMAN_SPACE: number; - const BROTLI_DECODER_ERROR_FORMAT_PADDING_1: number; - const BROTLI_DECODER_ERROR_FORMAT_PADDING_2: number; - const BROTLI_DECODER_ERROR_FORMAT_RESERVED: number; - const BROTLI_DECODER_ERROR_FORMAT_SIMPLE_HUFFMAN_ALPHABET: number; - const BROTLI_DECODER_ERROR_FORMAT_SIMPLE_HUFFMAN_SAME: number; - const BROTLI_DECODER_ERROR_FORMAT_TRANSFORM: number; - const BROTLI_DECODER_ERROR_FORMAT_WINDOW_BITS: number; - const BROTLI_DECODER_ERROR_INVALID_ARGUMENTS: number; - const BROTLI_DECODER_ERROR_UNREACHABLE: number; - const BROTLI_DECODER_NEEDS_MORE_INPUT: number; - const BROTLI_DECODER_NEEDS_MORE_OUTPUT: number; - const BROTLI_DECODER_NO_ERROR: number; - const BROTLI_DECODER_PARAM_DISABLE_RING_BUFFER_REALLOCATION: number; - const BROTLI_DECODER_PARAM_LARGE_WINDOW: number; - const BROTLI_DECODER_RESULT_ERROR: number; - const BROTLI_DECODER_RESULT_NEEDS_MORE_INPUT: number; - const 
BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT: number; - const BROTLI_DECODER_RESULT_SUCCESS: number; - const BROTLI_DECODER_SUCCESS: number; - const BROTLI_DEFAULT_MODE: number; - const BROTLI_DEFAULT_QUALITY: number; - const BROTLI_DEFAULT_WINDOW: number; - const BROTLI_ENCODE: number; - const BROTLI_LARGE_MAX_WINDOW_BITS: number; - const BROTLI_MAX_INPUT_BLOCK_BITS: number; - const BROTLI_MAX_QUALITY: number; - const BROTLI_MAX_WINDOW_BITS: number; - const BROTLI_MIN_INPUT_BLOCK_BITS: number; - const BROTLI_MIN_QUALITY: number; - const BROTLI_MIN_WINDOW_BITS: number; - const BROTLI_MODE_FONT: number; - const BROTLI_MODE_GENERIC: number; - const BROTLI_MODE_TEXT: number; - const BROTLI_OPERATION_EMIT_METADATA: number; - const BROTLI_OPERATION_FINISH: number; - const BROTLI_OPERATION_FLUSH: number; - const BROTLI_OPERATION_PROCESS: number; - const BROTLI_PARAM_DISABLE_LITERAL_CONTEXT_MODELING: number; - const BROTLI_PARAM_LARGE_WINDOW: number; - const BROTLI_PARAM_LGBLOCK: number; - const BROTLI_PARAM_LGWIN: number; - const BROTLI_PARAM_MODE: number; - const BROTLI_PARAM_NDIRECT: number; - const BROTLI_PARAM_NPOSTFIX: number; - const BROTLI_PARAM_QUALITY: number; - const BROTLI_PARAM_SIZE_HINT: number; - const DEFLATE: number; - const DEFLATERAW: number; - const GUNZIP: number; - const GZIP: number; - const INFLATE: number; - const INFLATERAW: number; - const UNZIP: number; - // Allowed flush values. - const Z_NO_FLUSH: number; - const Z_PARTIAL_FLUSH: number; - const Z_SYNC_FLUSH: number; - const Z_FULL_FLUSH: number; - const Z_FINISH: number; - const Z_BLOCK: number; - const Z_TREES: number; - // Return codes for the compression/decompression functions. - // Negative values are errors, positive values are used for special but normal events. 
- const Z_OK: number; - const Z_STREAM_END: number; - const Z_NEED_DICT: number; - const Z_ERRNO: number; - const Z_STREAM_ERROR: number; - const Z_DATA_ERROR: number; - const Z_MEM_ERROR: number; - const Z_BUF_ERROR: number; - const Z_VERSION_ERROR: number; - // Compression levels. - const Z_NO_COMPRESSION: number; - const Z_BEST_SPEED: number; - const Z_BEST_COMPRESSION: number; - const Z_DEFAULT_COMPRESSION: number; - // Compression strategy. - const Z_FILTERED: number; - const Z_HUFFMAN_ONLY: number; - const Z_RLE: number; - const Z_FIXED: number; - const Z_DEFAULT_STRATEGY: number; - const Z_DEFAULT_WINDOWBITS: number; - const Z_MIN_WINDOWBITS: number; - const Z_MAX_WINDOWBITS: number; - const Z_MIN_CHUNK: number; - const Z_MAX_CHUNK: number; - const Z_DEFAULT_CHUNK: number; - const Z_MIN_MEMLEVEL: number; - const Z_MAX_MEMLEVEL: number; - const Z_DEFAULT_MEMLEVEL: number; - const Z_MIN_LEVEL: number; - const Z_MAX_LEVEL: number; - const Z_DEFAULT_LEVEL: number; - const ZLIB_VERNUM: number; - } - // Allowed flush values. - /** @deprecated Use `constants.Z_NO_FLUSH` */ - const Z_NO_FLUSH: number; - /** @deprecated Use `constants.Z_PARTIAL_FLUSH` */ - const Z_PARTIAL_FLUSH: number; - /** @deprecated Use `constants.Z_SYNC_FLUSH` */ - const Z_SYNC_FLUSH: number; - /** @deprecated Use `constants.Z_FULL_FLUSH` */ - const Z_FULL_FLUSH: number; - /** @deprecated Use `constants.Z_FINISH` */ - const Z_FINISH: number; - /** @deprecated Use `constants.Z_BLOCK` */ - const Z_BLOCK: number; - /** @deprecated Use `constants.Z_TREES` */ - const Z_TREES: number; - // Return codes for the compression/decompression functions. - // Negative values are errors, positive values are used for special but normal events. 
- /** @deprecated Use `constants.Z_OK` */ - const Z_OK: number; - /** @deprecated Use `constants.Z_STREAM_END` */ - const Z_STREAM_END: number; - /** @deprecated Use `constants.Z_NEED_DICT` */ - const Z_NEED_DICT: number; - /** @deprecated Use `constants.Z_ERRNO` */ - const Z_ERRNO: number; - /** @deprecated Use `constants.Z_STREAM_ERROR` */ - const Z_STREAM_ERROR: number; - /** @deprecated Use `constants.Z_DATA_ERROR` */ - const Z_DATA_ERROR: number; - /** @deprecated Use `constants.Z_MEM_ERROR` */ - const Z_MEM_ERROR: number; - /** @deprecated Use `constants.Z_BUF_ERROR` */ - const Z_BUF_ERROR: number; - /** @deprecated Use `constants.Z_VERSION_ERROR` */ - const Z_VERSION_ERROR: number; - // Compression levels. - /** @deprecated Use `constants.Z_NO_COMPRESSION` */ - const Z_NO_COMPRESSION: number; - /** @deprecated Use `constants.Z_BEST_SPEED` */ - const Z_BEST_SPEED: number; - /** @deprecated Use `constants.Z_BEST_COMPRESSION` */ - const Z_BEST_COMPRESSION: number; - /** @deprecated Use `constants.Z_DEFAULT_COMPRESSION` */ - const Z_DEFAULT_COMPRESSION: number; - // Compression strategy. 
- /** @deprecated Use `constants.Z_FILTERED` */ - const Z_FILTERED: number; - /** @deprecated Use `constants.Z_HUFFMAN_ONLY` */ - const Z_HUFFMAN_ONLY: number; - /** @deprecated Use `constants.Z_RLE` */ - const Z_RLE: number; - /** @deprecated Use `constants.Z_FIXED` */ - const Z_FIXED: number; - /** @deprecated Use `constants.Z_DEFAULT_STRATEGY` */ - const Z_DEFAULT_STRATEGY: number; - /** @deprecated */ - const Z_BINARY: number; - /** @deprecated */ - const Z_TEXT: number; - /** @deprecated */ - const Z_ASCII: number; - /** @deprecated */ - const Z_UNKNOWN: number; - /** @deprecated */ - const Z_DEFLATED: number; -} -declare module 'node:zlib' { - export * from 'zlib'; -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Atomix Virtual DJ Pro V8.0.2094 Utorrent.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Atomix Virtual DJ Pro V8.0.2094 Utorrent.md deleted file mode 100644 index 9ddf77cd7f2b6493bd9c19ca6634deae79c4a3d6..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Atomix Virtual DJ Pro V8.0.2094 Utorrent.md +++ /dev/null @@ -1,74 +0,0 @@ -
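The deleted `zlib.d.ts` above types Node's sync, callback, and promisified compression functions. The core contract — compress bytes, then decompress them back byte-for-byte — is easy to sanity-check in a short script. The sketch below uses Python's standard `zlib` module as a stand-in for the Node API the typings describe (constant and function names differ slightly between the two runtimes):

```python
import zlib

# Round-trip analogous to Node's deflateSync/inflateSync pair.
# zlib.Z_BEST_COMPRESSION mirrors constants.Z_BEST_COMPRESSION in the typings above.
payload = b"hello zlib " * 100

compressed = zlib.compress(payload, level=zlib.Z_BEST_COMPRESSION)
restored = zlib.decompress(compressed)

assert restored == payload
assert len(compressed) < len(payload)  # highly repetitive input compresses well
```

The same round-trip holds for the gzip and Brotli variants typed above; only the framing and the option constants change.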

      Atomix Virtual DJ Pro V8.0.2094 Utorrent


      Download ✸✸✸ https://urlgoal.com/2uCKn6



      -
      -hr - - nope - - k - - thx - - Ben64: any good video tutorial for beginners in c++ - - i am thinking of - - creating a media player - - and other things - - it is not a c++ tutorial or learning it is not the goal - - ankar: what do you want to learn? - - ankar, (probably) - - i want to create a media player - - ankar: what sort of media player? - - and i am a newbie - - ankar: what languages are you comfortable with? - - o_O - - ankar: what is your goal with a media player? - - so should i go through linux/unix development first - - or - - should i start with c++ - - which i don't know much about - - ankar: well, most of the questions in this channel are about linux, so start with that :) - - ok - - ankar: like I said, I don't know c++ very well. But I am sure there are people here who can help you. - - yeah - - ankar: and if you don't know c++, start with C - - for the most part, C is easier to start with than c++ - - as c++ adds OOP and more - - for instance, starting with C++ you would have to know all the exceptions and operators - - ioria, could you try again: cglib-nodep-2.2.jar - - ankar: - - bean__: i am an advanced c++ user - - bean__: i am thinking of learning c/c++ - - so i want to know - - bean 4fefd39f24
      -
      -
      -

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cyberlux 8 Crack [BEST].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cyberlux 8 Crack [BEST].md deleted file mode 100644 index f66b2264c899b01cad0ed55977c955acf490cc02..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cyberlux 8 Crack [BEST].md +++ /dev/null @@ -1,6 +0,0 @@ -

      Cyberlux 8 Crack


      Download Ziphttps://urlgoal.com/2uCLrk



      -
      -2 cyberlux 8. Telecharger crack cyberlux 7. 10 septembre 2014.... PDF cyberlux windows 10,cyberlux fusion 7,cyberlux 8 full,logiciel gestion ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Heyy Babyy Movies Hd 720p In Hindi LINK.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Heyy Babyy Movies Hd 720p In Hindi LINK.md deleted file mode 100644 index 79cc3c7373af028907045b09c3ef7837b2b6198d..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Heyy Babyy Movies Hd 720p In Hindi LINK.md +++ /dev/null @@ -1,10 +0,0 @@ -
      -

I am a big fan of Dr. Ravi Kaur, Ramdev Ji, Gita, Gnan Prasad, Akshay Kumar, Ali Hyder, R.J. Shankar, Arjun Kumar, Ritesh, Ajay Devgn, Dharamveer Singh, Ravi Shankar and Ram Dev. The list goes on. I am more than happy to know that I have got a chance to see all the above-mentioned Bollywood and Hollywood movies

      -

      Heyy Babyy Movies Hd 720p In Hindi


      Download Filehttps://urlgoal.com/2uCME0



      -

Check out some of the all-time popular Heyy Babyy dialogues. You can easily copy them and share them with your friends on WhatsApp, Facebook and Twitter.

      -

Directed by Sajid Khan, Heyy Babyy was a racy and romantic Hindi comedy film produced by Dilip Kumar Films and released by Sons Entertainment. The film stars Akshay Kumar, Vidya Balan, Fardeen Khan and Riteish Deshmukh in the lead roles, and it became one of the highest-grossing films of 2007. The story is loosely based on the Marathi movie Baalache Baap Brahmachaari. The film was a hit and ran for over 100 days in some centres.

      -

      Sajid Khan is a skillful storyteller. Although he's known for impromptu, funny one-liners, it's the handling of the emotional moments in the enterprise that catches you by complete surprise. Note another aspect where a director makes all the difference: Akshay, Fardeen and Ritesh have been a part of comic capers in the past, but after having watched this trio in HEYY BABYY, not once do you feel that they're repeating themselves.

      -

Heyy Babyy (2007): Sajid Khan's romantic comedy is an adaptation of the Marathi movie Baalache Baap Brahmachaari, which in turn was based on the 1987 American film Three Men and a Baby. The Akshay Kumar, Fardeen Khan, Vidya Balan and Riteish Deshmukh starrer earned Rs 47 crore at the box office.

      -

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/robin0307/MMOCR/configs/_base_/recog_models/crnn_tps.py b/spaces/robin0307/MMOCR/configs/_base_/recog_models/crnn_tps.py deleted file mode 100644 index 9719eb3c521cee55beee1711a73bd29a07d10366..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/_base_/recog_models/crnn_tps.py +++ /dev/null @@ -1,18 +0,0 @@ -# model -label_convertor = dict( - type='CTCConvertor', dict_type='DICT36', with_unknown=False, lower=True) - -model = dict( - type='CRNNNet', - preprocessor=dict( - type='TPSPreprocessor', - num_fiducial=20, - img_size=(32, 100), - rectified_img_size=(32, 100), - num_img_channel=1), - backbone=dict(type='VeryDeepVgg', leaky_relu=False, input_channels=1), - encoder=None, - decoder=dict(type='CRNNDecoder', in_channels=512, rnn_flag=True), - loss=dict(type='CTCLoss'), - label_convertor=label_convertor, - pretrained=None) diff --git a/spaces/robin0307/MMOCR/configs/textdet/panet/panet_r50_fpem_ffm_600e_icdar2017.py b/spaces/robin0307/MMOCR/configs/textdet/panet/panet_r50_fpem_ffm_600e_icdar2017.py deleted file mode 100644 index 0e9768d4742e845a45bd343d70bd06f3cb0e4fcb..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/textdet/panet/panet_r50_fpem_ffm_600e_icdar2017.py +++ /dev/null @@ -1,33 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_600e.py', - '../../_base_/det_models/panet_r50_fpem_ffm.py', - '../../_base_/det_datasets/icdar2017.py', - '../../_base_/det_pipelines/panet_pipeline.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline_icdar2017 = {{_base_.train_pipeline_icdar2017}} -test_pipeline_icdar2017 = {{_base_.test_pipeline_icdar2017}} - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - 
pipeline=train_pipeline_icdar2017), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2017), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2017)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/rorallitri/biomedical-language-models/logs/AutoCAD Inventor LT Suite 2011 Herunterladen Aktivator 32 Bits DE Was Sie ber die neueste Version der CAD-Software wissen mssen.md b/spaces/rorallitri/biomedical-language-models/logs/AutoCAD Inventor LT Suite 2011 Herunterladen Aktivator 32 Bits DE Was Sie ber die neueste Version der CAD-Software wissen mssen.md deleted file mode 100644 index 9e223757f3704d3a8ead21531adfe181619d5c79..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/AutoCAD Inventor LT Suite 2011 Herunterladen Aktivator 32 Bits DE Was Sie ber die neueste Version der CAD-Software wissen mssen.md +++ /dev/null @@ -1,6 +0,0 @@ -
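The two deleted MMOCR configs above rely on mmcv-style `_base_` inheritance: a child config lists base files, then overrides or extends their keys. The helper below is a simplified sketch of that merge rule (child keys win; nested dicts merge recursively) using hypothetical values echoing the panet config — it is not mmcv's actual `Config` implementation:

```python
def merge_cfg(base: dict, child: dict) -> dict:
    """Recursively merge child over base (simplified _base_ override rule)."""
    merged = dict(base)
    for key, value in child.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_cfg(merged[key], value)  # nested dicts merge
        else:
            merged[key] = value  # child scalar/list replaces base value
    return merged

# Hypothetical base/child pair modelled on the config above.
base = {"data": {"samples_per_gpu": 2, "workers_per_gpu": 2},
        "evaluation": {"interval": 100}}
child = {"data": {"samples_per_gpu": 4},
         "evaluation": {"interval": 10, "metric": "hmean-iou"}}

cfg = merge_cfg(base, child)
assert cfg["data"] == {"samples_per_gpu": 4, "workers_per_gpu": 2}
assert cfg["evaluation"] == {"interval": 10, "metric": "hmean-iou"}
```

This is why the config above only needs to state `samples_per_gpu=4` and the evaluation metric: everything else is inherited from the `_base_` files it lists.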

      AutoCAD Inventor LT Suite 2011 Herunterladen Aktivator 32 Bits DE


      Download Filehttps://tinurll.com/2uznqm



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/rorallitri/biomedical-language-models/logs/FL Studio Producer Edition 12.5.1 Build 165 Keygen - Crackingp Crack How to Unlock the Full Potential of Your Music Production Software.md b/spaces/rorallitri/biomedical-language-models/logs/FL Studio Producer Edition 12.5.1 Build 165 Keygen - Crackingp Crack How to Unlock the Full Potential of Your Music Production Software.md deleted file mode 100644 index dfaab9d5c3f9c6e2d39a19531eb9815c98a84529..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/FL Studio Producer Edition 12.5.1 Build 165 Keygen - Crackingp Crack How to Unlock the Full Potential of Your Music Production Software.md +++ /dev/null @@ -1,6 +0,0 @@ -

      FL Studio Producer Edition 12.5.1 Build 165 Keygen - Crackingp Crack


      Download >>> https://tinurll.com/2uzmpZ



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/non_leaking.py b/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/non_leaking.py deleted file mode 100644 index 4e044f98e836ae2c011ea91246b304d5ab1a1422..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/non_leaking.py +++ /dev/null @@ -1,137 +0,0 @@ -import math - -import torch -from torch.nn import functional as F - - -def translate_mat(t_x, t_y): - batch = t_x.shape[0] - - mat = torch.eye(3).unsqueeze(0).repeat(batch, 1, 1) - translate = torch.stack((t_x, t_y), 1) - mat[:, :2, 2] = translate - - return mat - - -def rotate_mat(theta): - batch = theta.shape[0] - - mat = torch.eye(3).unsqueeze(0).repeat(batch, 1, 1) - sin_t = torch.sin(theta) - cos_t = torch.cos(theta) - rot = torch.stack((cos_t, -sin_t, sin_t, cos_t), 1).view(batch, 2, 2) - mat[:, :2, :2] = rot - - return mat - - -def scale_mat(s_x, s_y): - batch = s_x.shape[0] - - mat = torch.eye(3).unsqueeze(0).repeat(batch, 1, 1) - mat[:, 0, 0] = s_x - mat[:, 1, 1] = s_y - - return mat - - -def lognormal_sample(size, mean=0, std=1): - return torch.empty(size).log_normal_(mean=mean, std=std) - - -def category_sample(size, categories): - category = torch.tensor(categories) - sample = torch.randint(high=len(categories), size=(size,)) - - return category[sample] - - -def uniform_sample(size, low, high): - return torch.empty(size).uniform_(low, high) - - -def normal_sample(size, mean=0, std=1): - return torch.empty(size).normal_(mean, std) - - -def bernoulli_sample(size, p): - return torch.empty(size).bernoulli_(p) - - -def random_affine_apply(p, transform, prev, eye): - size = transform.shape[0] - select = bernoulli_sample(size, p).view(size, 1, 1) - select_transform = select * transform + (1 - select) * eye - - return select_transform @ prev - - -def sample_affine(p, size, height, width): - G = torch.eye(3).unsqueeze(0).repeat(size, 1, 1) - eye = G - - # flip - param 
= category_sample(size, (0, 1)) - Gc = scale_mat(1 - 2.0 * param, torch.ones(size)) - G = random_affine_apply(p, Gc, G, eye) - # print('flip', G, scale_mat(1 - 2.0 * param, torch.ones(size)), sep='\n') - - # 90 rotate - param = category_sample(size, (0, 3)) - Gc = rotate_mat(-math.pi / 2 * param) - G = random_affine_apply(p, Gc, G, eye) - # print('90 rotate', G, rotate_mat(-math.pi / 2 * param), sep='\n') - - # integer translate - param = uniform_sample(size, -0.125, 0.125) - param_height = torch.round(param * height) / height - param_width = torch.round(param * width) / width - Gc = translate_mat(param_width, param_height) - G = random_affine_apply(p, Gc, G, eye) - # print('integer translate', G, translate_mat(param_width, param_height), sep='\n') - - # isotropic scale - param = lognormal_sample(size, std=0.2 * math.log(2)) - Gc = scale_mat(param, param) - G = random_affine_apply(p, Gc, G, eye) - # print('isotropic scale', G, scale_mat(param, param), sep='\n') - - p_rot = 1 - math.sqrt(1 - p) - - # pre-rotate - param = uniform_sample(size, -math.pi, math.pi) - Gc = rotate_mat(-param) - G = random_affine_apply(p_rot, Gc, G, eye) - # print('pre-rotate', G, rotate_mat(-param), sep='\n') - - # anisotropic scale - param = lognormal_sample(size, std=0.2 * math.log(2)) - Gc = scale_mat(param, 1 / param) - G = random_affine_apply(p, Gc, G, eye) - # print('anisotropic scale', G, scale_mat(param, 1 / param), sep='\n') - - # post-rotate - param = uniform_sample(size, -math.pi, math.pi) - Gc = rotate_mat(-param) - G = random_affine_apply(p_rot, Gc, G, eye) - # print('post-rotate', G, rotate_mat(-param), sep='\n') - - # fractional translate - param = normal_sample(size, std=0.125) - Gc = translate_mat(param, param) - G = random_affine_apply(p, Gc, G, eye) - # print('fractional translate', G, translate_mat(param, param), sep='\n') - - return G - - -def apply_affine(img, G): - grid = F.affine_grid( - torch.inverse(G).to(img)[:, :2, :], img.shape, align_corners=False - ) - 
img_affine = F.grid_sample( - img, grid, mode="bilinear", align_corners=False, padding_mode="reflection" - ) - - return img_affine diff --git a/spaces/safi842/FashionGen/netdissect/upsegmodel/prroi_pool/src/prroi_pooling_gpu.c b/spaces/safi842/FashionGen/netdissect/upsegmodel/prroi_pool/src/prroi_pooling_gpu.c deleted file mode 100644 index 1e652963cdb76fe628d0a33bc270d2c25a0f3770..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/netdissect/upsegmodel/prroi_pool/src/prroi_pooling_gpu.c +++ /dev/null @@ -1,113 +0,0 @@ -/* - * File : prroi_pooling_gpu.c - * Author : Jiayuan Mao, Tete Xiao - * Email : maojiayuan@gmail.com, jasonhsiao97@gmail.com - * Date : 07/13/2018 - * - * Distributed under terms of the MIT license. - * Copyright (c) 2017 Megvii Technology Limited. - */ - -#include -#include - -#include -#include - -#include - -#include "prroi_pooling_gpu_impl.cuh" - - -at::Tensor prroi_pooling_forward_cuda(const at::Tensor &features, const at::Tensor &rois, int pooled_height, int pooled_width, float spatial_scale) { - int nr_rois = rois.size(0); - int nr_channels = features.size(1); - int height = features.size(2); - int width = features.size(3); - int top_count = nr_rois * nr_channels * pooled_height * pooled_width; - auto output = at::zeros({nr_rois, nr_channels, pooled_height, pooled_width}, features.options()); - - if (output.numel() == 0) { - THCudaCheck(cudaGetLastError()); - return output; - } - - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - PrRoIPoolingForwardGpu( - stream, features.data(), rois.data(), output.data(), - nr_channels, height, width, pooled_height, pooled_width, spatial_scale, - top_count - ); - - THCudaCheck(cudaGetLastError()); - return output; -} - -at::Tensor prroi_pooling_backward_cuda( - const at::Tensor &features, const at::Tensor &rois, const at::Tensor &output, const at::Tensor &output_diff, - int pooled_height, int pooled_width, float spatial_scale) { - - auto features_diff = 
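The deleted `non_leaking.py` above builds each augmentation as a 3x3 homogeneous matrix and left-multiplies new transforms onto the running product `G`. A dependency-free sketch of that composition order (scale first, then rotate, then translate), checked on a single point; the matrix layouts mirror the `translate_mat`/`rotate_mat`/`scale_mat` helpers above, with plain lists standing in for torch's batched tensors:

```python
import math

def matmul3(a, b):
    """3x3 matrix product over plain nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def scale(sx, sy):
    return [[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]]

# Left-multiplying, as in random_affine_apply: the last matrix applied is leftmost.
G = matmul3(translate(1.0, 0.0), matmul3(rotate(math.pi / 2), scale(2.0, 2.0)))

# Point (1, 0): scale -> (2, 0); rotate 90 degrees -> (0, 2); translate -> (1, 2).
px, py, _ = [sum(G[i][j] * p for j, p in enumerate((1.0, 0.0, 1.0)))
             for i in range(3)]
assert abs(px - 1.0) < 1e-9 and abs(py - 2.0) < 1e-9
```

`apply_affine` then inverts the accumulated `G` before calling `F.affine_grid`, because grid sampling maps output coordinates back to input coordinates.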
at::zeros_like(features); - - int nr_rois = rois.size(0); - int batch_size = features.size(0); - int nr_channels = features.size(1); - int height = features.size(2); - int width = features.size(3); - int top_count = nr_rois * nr_channels * pooled_height * pooled_width; - int bottom_count = batch_size * nr_channels * height * width; - - if (output.numel() == 0) { - THCudaCheck(cudaGetLastError()); - return features_diff; - } - - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - PrRoIPoolingBackwardGpu( - stream, - features.data(), rois.data(), output.data(), output_diff.data(), - features_diff.data(), - nr_channels, height, width, pooled_height, pooled_width, spatial_scale, - top_count, bottom_count - ); - - THCudaCheck(cudaGetLastError()); - return features_diff; -} - -at::Tensor prroi_pooling_coor_backward_cuda( - const at::Tensor &features, const at::Tensor &rois, const at::Tensor &output, const at::Tensor &output_diff, - int pooled_height, int pooled_width, float spatial_scale) { - - auto coor_diff = at::zeros_like(rois); - - int nr_rois = rois.size(0); - int nr_channels = features.size(1); - int height = features.size(2); - int width = features.size(3); - int top_count = nr_rois * nr_channels * pooled_height * pooled_width; - int bottom_count = nr_rois * 5; - - if (output.numel() == 0) { - THCudaCheck(cudaGetLastError()); - return coor_diff; - } - - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - PrRoIPoolingCoorBackwardGpu( - stream, - features.data(), rois.data(), output.data(), output_diff.data(), - coor_diff.data(), - nr_channels, height, width, pooled_height, pooled_width, spatial_scale, - top_count, bottom_count - ); - - THCudaCheck(cudaGetLastError()); - return coor_diff; -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("prroi_pooling_forward_cuda", &prroi_pooling_forward_cuda, "PRRoIPooling_forward"); - m.def("prroi_pooling_backward_cuda", &prroi_pooling_backward_cuda, "PRRoIPooling_backward"); - 
m.def("prroi_pooling_coor_backward_cuda", &prroi_pooling_coor_backward_cuda, "PRRoIPooling_backward_coor"); -} diff --git a/spaces/sanjayw/starchat-playground/share_btn.py b/spaces/sanjayw/starchat-playground/share_btn.py deleted file mode 100644 index 14c0cc9147bd6aaadd9c1df07a763b542d696987..0000000000000000000000000000000000000000 --- a/spaces/sanjayw/starchat-playground/share_btn.py +++ /dev/null @@ -1,111 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - - // const gradioEl = document.querySelector('body > gradio-app'); - const gradioEl = document.querySelector("gradio-app"); - const inputTxt = gradioEl.querySelector('#q-input textarea').value; - const outputTxt = gradioEl.querySelector('#q-output').outerHTML; - - const titleLength = 150; - let titleTxt = inputTxt; - if(titleTxt.length > titleLength){ - titleTxt = titleTxt.slice(0, titleLength) + ' ...'; - } - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - if(!inputTxt || !outputTxt){ - return; - }; - - 
shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const descriptionMd = `### Question: -${inputTxt} - -### Answer: - -${outputTxt}`; - - const params = { - title: titleTxt, - description: descriptionMd, - }; - - const paramsStr = Object.entries(params) - .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`) - .join('&'); - - window.open(`https://huggingface.co/spaces/HuggingFaceH4/star-chat-demo/discussions/new?${paramsStr}`, '_blank'); - - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" - -share_btn_css = """ -a {text-decoration-line: underline; font-weight: 600;} -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { transform: rotate(0deg); } - to { transform: rotate(360deg); } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -""" diff --git a/spaces/scedlatioru/img-to-music/example/Foundation Design Principles And Practices (3rd Edition) Downloads Torrent.md b/spaces/scedlatioru/img-to-music/example/Foundation Design Principles And Practices (3rd Edition) Downloads Torrent.md deleted file mode 100644 index 5357291f3d6569ab9088df3c1fd5363cc6c4954c..0000000000000000000000000000000000000000 --- 
a/spaces/scedlatioru/img-to-music/example/Foundation Design Principles And Practices (3rd Edition) Downloads Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Foundation Design: Principles And Practices (3rd Edition) Downloads Torrent


      Download Filehttps://gohhs.com/2uEyMz



- -Principles of Foundation Engineering 6th - Solution Manual. Contents for the 2nd edition: see below. ... [PDF]Analysis and Design of Analog Integrated Circuits 5th Ed ( vol. ... 3rd Edition ISBN: Addison-Wesley, 2001 Grading policy 5. pdf Size: 4758 KB ... Book Name: An Introduction to Statistical Methods and Data Analysis Edition ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/Goliyon Ki Raasleela Ram Leela Full Movie Download With English Subtitles ((NEW)).md b/spaces/scedlatioru/img-to-music/example/Goliyon Ki Raasleela Ram Leela Full Movie Download With English Subtitles ((NEW)).md deleted file mode 100644 index 3353793b0e582c2236fb7ced92368b63af15b1bf..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Goliyon Ki Raasleela Ram Leela Full Movie Download With English Subtitles ((NEW)).md +++ /dev/null @@ -1,81 +0,0 @@ -
      -

      Goliyon Ki Raasleela Ram-Leela Full Movie Download with English Subtitles: How to Watch the Bollywood Hit Online

      - -

Goliyon Ki Raasleela Ram-Leela is a 2013 Hindi romantic drama film directed by Sanjay Leela Bhansali and starring Ranveer Singh and Deepika Padukone in the lead roles. The film is an adaptation of Shakespeare's Romeo and Juliet, set in Gujarat, India, where two rival clans have been at war for 500 years. Ram and Leela are the young heirs of these clans who fall in love at first sight, but their families are determined to keep them apart.

      - -

      The film was a critical and commercial success, earning praise for its direction, cinematography, music, costumes, and performances. It also won several awards, including nine Filmfare Awards and three National Film Awards. The film is considered one of Bhansali's best works and one of the most successful Bollywood films of all time.

      -

      goliyon ki raasleela ram leela full movie download with english subtitles


      Download ►►►►► https://gohhs.com/2uEAkq



      - -

      If you are looking for a way to watch Goliyon Ki Raasleela Ram-Leela full movie online with English subtitles, you have come to the right place. In this article, we will tell you how to stream or download the film legally and safely from various platforms.

      - -

      How to stream Goliyon Ki Raasleela Ram-Leela online with English subtitles

      - -

      One of the easiest ways to watch Goliyon Ki Raasleela Ram-Leela online with English subtitles is to stream it from a legitimate streaming service that offers the film in your region. Here are some of the options you can choose from:

      - -
        -
      • Voot: Voot is a video-on-demand service owned by Viacom18 that offers a variety of content, including movies, TV shows, originals, and live channels. You can watch Goliyon Ki Raasleela Ram-Leela on Voot for free with ads or subscribe to Voot Select for ad-free access and more benefits. Voot is available in India and some other countries.
      • -
      • Eros Now: Eros Now is a subscription-based streaming service that specializes in Indian entertainment, including movies, music, TV shows, originals, and more. You can watch Goliyon Ki Raasleela Ram-Leela on Eros Now with English subtitles by subscribing to one of their plans. Eros Now is available worldwide.
      • -
      • Jio Cinema: Jio Cinema is a streaming service that offers movies, TV shows, music videos, documentaries, and more from various genres and languages. You can watch Goliyon Ki Raasleela Ram-Leela on Jio Cinema for free with ads if you are a Jio user or subscribe to JioFiber for more benefits. Jio Cinema is available in India.
      • -
      - -

      How to download Goliyon Ki Raasleela Ram-Leela full movie with English subtitles

      - -

      If you prefer to download Goliyon Ki Raasleela Ram-Leela full movie with English subtitles and watch it offline at your convenience, you can do so from a legal digital platform that offers the film for rent or purchase. Here are some of the options you can choose from:

      - -
        -
      • Google Play Movies: Google Play Movies is a digital platform that allows you to rent or buy movies and TV shows from various genres and languages. You can download Goliyon Ki Raasleela Ram-Leela full movie with English subtitles from Google Play Movies by paying a small fee. Google Play Movies is available worldwide.
      • -
      • YouTube: YouTube is a video-sharing platform that also offers movies and TV shows for rent or purchase. You can download Goliyon Ki Raasleela Ram-Leela full movie with English subtitles from YouTube by paying a small fee. YouTube is available worldwide.
      • -
      - -

      Conclusion

      - -

      Goliyon Ki Raasleela Ram-Leela is a Bollywood masterpiece that you should not miss if you are a fan of romantic dramas. The film is a stunning adaptation of Romeo and Juliet that showcases the talent and chemistry of Ranveer Singh and Deepika Padukone. If you want to watch Goliyon Ki Raasleela Ram-Leela full movie online with English subtitles, you can stream it from Voot, Eros Now, or Jio Cinema, or download it from Google Play Movies or YouTube.

      -

      What is the plot of Goliyon Ki Raasleela Ram-Leela?

      - -

      Goliyon Ki Raasleela Ram-Leela is a modern-day adaptation of Romeo and Juliet, set in a fictional town called Ranjaar, where two rival clans, the Rajadis and the Saneras, have been at war for 500 years. The Rajadis are led by Ram (Ranveer Singh), a flirtatious and fearless young man who loves to enjoy life. The Saneras are led by Leela (Deepika Padukone), a fiery and beautiful young woman who is betrothed to another man.

      -

      - -

      One day, Ram sneaks into the Sanera territory to attend a festival and meets Leela. They instantly fall in love and decide to elope. However, their families are not willing to accept their relationship and try to separate them by any means necessary. Ram and Leela have to face many obstacles and challenges as they try to escape from their enemies and stay together.

      - -

      The film is a musical drama that features many songs and dances that showcase the culture and emotions of the characters. The film also explores themes such as love, hate, violence, revenge, loyalty, and destiny.

      - -

      What are the reviews and ratings of Goliyon Ki Raasleela Ram-Leela?

      - -

      Goliyon Ki Raasleela Ram-Leela received positive reviews from critics and audiences alike. The film was praised for its direction, cinematography, music, costumes, and performances. The film also won several awards, including nine Filmfare Awards and three National Film Awards.

      - -

      The film has a rating of 6.4 out of 10 on IMDb, based on 21,000 user ratings. The film also has a rating of 73% on Rotten Tomatoes, based on 15 critic reviews. The film also has a rating of 4 out of 5 on Google Play Movies and YouTube, based on over 1,000 user ratings.

      - -


      What are the features and benefits of Goliyon Ki Raasleela Ram-Leela full movie download with English subtitles?

      - -

      If you want to enjoy Goliyon Ki Raasleela Ram-Leela full movie with English subtitles, you might want to download it from a legal digital platform that offers the film for rent or purchase. Here are some of the features and benefits of downloading the film:

      - -
        -
      • You can watch the film offline at your convenience without worrying about internet connection or buffering issues.
      • -
      • You can watch the film on any device that supports video playback, such as your laptop, tablet, smartphone, or TV.
      • -
      • You can watch the film with English subtitles that are clear and accurate, and help you understand the dialogues and songs better.
      • -
      • You can watch the film in high definition quality that enhances the visual appeal and impact of the film.
      • -
      • You can support the filmmakers and actors by paying a small fee for their hard work and creativity.
      • -
      - -

      What are some tips and precautions for Goliyon Ki Raasleela Ram-Leela full movie download with English subtitles?

      - -

      If you decide to download Goliyon Ki Raasleela Ram-Leela full movie with English subtitles, you should follow some tips and precautions to ensure a safe and smooth experience. Here are some of them:

      - -
        -
      • Make sure you download the film from a legal digital platform that has the rights to distribute the film in your region. Avoid using illegal or pirated websites that might harm your device or expose you to legal risks.
      • -
      • Make sure you have enough storage space on your device before downloading the film. The film is about 2 hours and 35 minutes long, and might take up a lot of space depending on the quality and format.
      • -
      • Make sure you have a reliable and fast internet connection before downloading the film. The film might take a long time to download depending on your internet speed and bandwidth.
      • -
      • Make sure you have a compatible video player on your device that can play the film with English subtitles. You might need to install or update your video player software before watching the film.
      • -
      - -

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Painkiller Overdose Download Crack Serial Key [NEW].md b/spaces/scedlatioru/img-to-music/example/Painkiller Overdose Download Crack Serial Key [NEW].md deleted file mode 100644 index 527c29d42a5589a5f5667d4d5f604b954a65f4f2..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Painkiller Overdose Download Crack Serial Key [NEW].md +++ /dev/null @@ -1,6 +0,0 @@ -

      Painkiller Overdose Download Crack Serial Key


      Download Zip ✸✸✸ https://gohhs.com/2uEzCJ



      -
      -Only in Hollywood could a serial infant killer be a famous animal rights activist. Plus ... 15 Download here The Upgrade to 1. ... This increases the risk of overdose and death. ... Video of Hunter getting a foot job while smoking crack. ... One of the fields is a map of keys and values but it is being translated and stored as a. 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/SketchUp Pro 2017 !!TOP!! Crack License Key [ Windows Mac] Free Download.md b/spaces/scedlatioru/img-to-music/example/SketchUp Pro 2017 !!TOP!! Crack License Key [ Windows Mac] Free Download.md deleted file mode 100644 index da2ac3d8e1fa648a50cff226f6b6cd4bca3523cd..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/SketchUp Pro 2017 !!TOP!! Crack License Key [ Windows Mac] Free Download.md +++ /dev/null @@ -1,9 +0,0 @@ -
      -

SketchUp Free gives me the convenience of accessing all the tools of SketchUp's modeler anywhere I have access to the internet. It allows me to easily visualize my ideas and concepts before turning them into real projects. It's a brilliant way to travel with SketchUp, knowing I can access, illustrate, and collaborate wherever I am.

      -

      SketchUp Pro 2017 Crack License Key [ Windows Mac] Free Download


      Download File ✓✓✓ https://gohhs.com/2uEA2t



      -

Before you download, just a heads-up that this file of SketchUp 2017 is a much larger file than our online version. If this isn't for you, head back to our online version. For more information about offline and online versions, check out our Help Center article.

      -

SketchUp provides tools to make a virtual model of a home and helps you create a highly realistic visualization. The three-dimensional model of your home can be viewed from virtually any angle or position. You can rotate the 3D house or zoom in to see it in greater detail. SketchUp Pro 2017 Crack helps you create your 3D model quickly and easily, while still maintaining a high level of detail. Use the step-by-step modeling tools to quickly add to your model.

      -

For many years we've provided links to download most past versions of SketchUp here. However, we've recently decided to narrow things down and provide only the free versions of SketchUp that are most important. If you'd like to see the full list of versions, you can see that here.

      -

Before you download, just a heads-up that this file of SketchUp 2021 is a much larger file than our online version. If this isn't for you, head back to our online version. For more information about offline and online versions, check out our Help Center article.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/sdhsdhk/bingosjj/src/components/ui/input.tsx b/spaces/sdhsdhk/bingosjj/src/components/ui/input.tsx deleted file mode 100644 index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingosjj/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = 'Input' - -export { Input } diff --git a/spaces/seok07/1JK50/README.md b/spaces/seok07/1JK50/README.md deleted file mode 100644 index 14227ce76449f67d83b66a156b74589fbf3b2c3d..0000000000000000000000000000000000000000 --- a/spaces/seok07/1JK50/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: VoiceChange -emoji: 👀 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.28.3 -app_file: app_multi.py -pinned: false -license: mit -duplicated_from: kevinwang676/Voice-Changer ---- diff --git a/spaces/sharmaanupam/eigenvectors/description.md b/spaces/sharmaanupam/eigenvectors/description.md deleted file mode 100644 index 975a2c526311f25e459cc328417d85eb5ea63e1f..0000000000000000000000000000000000000000 --- a/spaces/sharmaanupam/eigenvectors/description.md +++ /dev/null @@ -1,50 +0,0 @@ -You might observe that sometimes the shapes collapse to a line, or sometimes the eigenvectors won't show due to them being complex. Let us look at why this happens in some detail. - -A point $(x, y)$ transforms under $A$ to $(x',y')$ as follows - -$$ -\begin{bmatrix} x'\\ y' \end{bmatrix} = \underbrace{\begin{bmatrix} a_{00} & a_{01} \\ a_{10} & a_{11} \end{bmatrix}}_{A} \begin{bmatrix} x\\ y \end{bmatrix} -$$ - -It is helpful to see where the unit basis vectors map to under the transformation. 
- -$$
-\begin{bmatrix}
-1\\
-0
-\end{bmatrix} \stackrel{A}{\longrightarrow} \begin{bmatrix}
-a_{00}\\
-a_{10}
-\end{bmatrix} \;\;\;\;\;\;\;\;
-\begin{bmatrix}
-0\\
-1
-\end{bmatrix} \stackrel{A}{\longrightarrow} \begin{bmatrix}
-a_{01}\\
-a_{11}
-\end{bmatrix}
-$$
-
-Eigenvectors are precisely those vectors that satisfy the following property
-$$ Av = \lambda v $$
-They determine the directions that remain invariant under the transformation. Their corresponding eigenvalues $\lambda$ determine how much the space is stretched or squished in that direction. An eigenvalue of zero implies that the transformed space collapses in that direction. Since the determinant is the product of the eigenvalues, it quantifies how much the transform amplifies the area measure.
-
-To obtain the eigenvalues, we solve the characteristic equation $\left| A - \lambda I \right| = 0$, which in our case expands to
-
-$$ \lambda^2 -(a_{00}+a_{11})\lambda + (a_{00}a_{11} - a_{01}a_{10})=0 $$
-
-When the matrix $A$ is singular, the transformed space collapses to a line. We can verify that $\lambda = 0$ is a solution only when $\left|A\right| = a_{00}a_{11} - a_{01}a_{10} = 0$. The following conditions also guarantee that $A$ is singular:
-* One row is a multiple of the other (trivially true when a row is identically zero); the same holds for the columns.
-* Three (or more) elements are zero.
-
-Solving for $\lambda$, we get $$ \frac{1}{2} \left(a_{00}+a_{11}\pm \sqrt{a_{00}^2-2 a_{11} a_{00}+a_{11}^2+4 a_{01} a_{10}}\right) $$
-
-The quantity under the square root sign is called the discriminant, denoted by $D$. When $D < 0$, the eigenvalues, and consequently the eigenvectors, are complex. On the other hand, when $A$ is symmetric ($a_{01} = a_{10}$), the discriminant simplifies to $D = (a_{00}-a_{11})^2 + 4a_{01}^2$, which is always nonnegative, so the eigendecomposition is real. 
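The eigenvalue formulas above are easy to verify numerically. The following is a minimal sketch (using NumPy; the symmetric example matrix is arbitrarily chosen, so the eigendecomposition is guaranteed to be real) that checks the characteristic-equation roots against a library eigensolver and confirms the invariance property $Av = \lambda v$:

```python
import numpy as np

# An arbitrary symmetric 2x2 example (a01 == a10), so D >= 0
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Roots of the characteristic equation:
# lambda^2 - (a00 + a11)*lambda + (a00*a11 - a01*a10) = 0
tr = A[0, 0] + A[1, 1]
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
D = tr ** 2 - 4 * det                      # the discriminant
roots = ((tr - np.sqrt(D)) / 2, (tr + np.sqrt(D)) / 2)

# Cross-check against NumPy's eigensolver
vals, vecs = np.linalg.eig(A)
assert np.allclose(sorted(roots), sorted(vals))

# Each eigenvector's direction is invariant: A v = lambda v
for i in range(2):
    assert np.allclose(A @ vecs[:, i], vals[i] * vecs[:, i])

# det(A) equals the product of the eigenvalues (area scaling)
assert np.allclose(det, np.prod(vals))
```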
- -Some common transformations include
-
-| Name | Matrix | Explanation |
-|:----:|:--------------------:|:-----:|
-|Stretch |$\begin{bmatrix} s_{x} & 0 \\ 0 & s_{y} \end{bmatrix}$| Stretches by $s_x$ in the $x$-direction and by $s_y$ in the $y$-direction. When $s_x = s_y = s$, this is equivalent to scaling by $s$. |
-|Shear| $\begin{bmatrix} 1 & s_{x} \\ s_{y} & 1 \end{bmatrix}$| Shears simultaneously by $s_x$ in the $x$-direction and by $s_y$ in the $y$-direction. When $s_x = -s_y$, this is equivalent to a rotation combined with a scaling. |
-|Rotate| $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$| Rotation by $\theta$ in the anti-clockwise direction. Since no vector keeps its direction under a rotation (for $\theta$ not a multiple of $\pi$), the eigenvalues and eigenvectors are complex. | diff --git a/spaces/shengzi/shibing624-gpt2-dialogbot-base-chinese/README.md b/spaces/shengzi/shibing624-gpt2-dialogbot-base-chinese/README.md deleted file mode 100644 index 3327862024f86d86ff6d0807a0cda46c86a0f77b..0000000000000000000000000000000000000000 --- a/spaces/shengzi/shibing624-gpt2-dialogbot-base-chinese/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Shibing624 Gpt2 Dialogbot Base Chinese -emoji: 💻 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/README.md b/spaces/shiwan10000/CodeFormer/CodeFormer/README.md deleted file mode 100644 index 65810cdf4ce36d8ba152de80df00fa4c8802ee81..0000000000000000000000000000000000000000 --- a/spaces/shiwan10000/CodeFormer/CodeFormer/README.md +++ /dev/null @@ -1,123 +0,0 @@ -<p align="center">

      - -

      - -## Towards Robust Blind Face Restoration with Codebook Lookup Transformer - -[Paper](https://arxiv.org/abs/2206.11253) | [Project Page](https://shangchenzhou.com/projects/CodeFormer/) | [Video](https://youtu.be/d3VDpkXlueI) - - -google colab logo [![Replicate](https://img.shields.io/badge/Demo-%F0%9F%9A%80%20Replicate-blue)](https://replicate.com/sczhou/codeformer) ![visitors](https://visitor-badge.glitch.me/badge?page_id=sczhou/CodeFormer) - -[Shangchen Zhou](https://shangchenzhou.com/), [Kelvin C.K. Chan](https://ckkelvinchan.github.io/), [Chongyi Li](https://li-chongyi.github.io/), [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/) - -S-Lab, Nanyang Technological University - - - - -:star: If CodeFormer is helpful to your images or projects, please help star this repo. Thanks! :hugs: - -### Update - -- **2022.09.09**: Integrated to :rocket: [Replicate](https://replicate.com/). Try out online demo! [![Replicate](https://img.shields.io/badge/Demo-%F0%9F%9A%80%20Replicate-blue)](https://replicate.com/sczhou/codeformer) -- **2022.09.04**: Add face upsampling `--face_upsample` for high-resolution AI-created face enhancement. -- **2022.08.23**: Some modifications on face detection and fusion for better AI-created face enhancement. -- **2022.08.07**: Integrate [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) to support background image enhancement. -- **2022.07.29**: Integrate new face detectors of `['RetinaFace'(default), 'YOLOv5']`. -- **2022.07.17**: Add Colab demo of CodeFormer. google colab logo -- **2022.07.16**: Release inference code for face restoration. :blush: -- **2022.06.21**: This repo is created. 
- -### TODO
-- [ ] Add checkpoint for face inpainting
-- [ ] Add training code and config files
-- [x] ~~Add background image enhancement~~
-
-#### Face Restoration
-
-
-
-#### Face Color Enhancement and Restoration
-
-
-#### Face Inpainting
-
-
-
-
-### Dependencies and Installation
-
-- Pytorch >= 1.7.1
-- CUDA >= 10.1
-- Other required packages in `requirements.txt`
-```
-# git clone this repository
-git clone https://github.com/sczhou/CodeFormer
-cd CodeFormer
-
-# create new anaconda env
-conda create -n codeformer python=3.8 -y
-conda activate codeformer
-
-# install python dependencies
-pip3 install -r requirements.txt
-python basicsr/setup.py develop
-```
-
-
-### Quick Inference
-
-##### Download Pre-trained Models:
-Download the facelib pretrained models from [[Google Drive](https://drive.google.com/drive/folders/1b_3qwrzY_kTQh0-SnBoGBgOrJ_PLZSKm?usp=sharing) | [OneDrive](https://entuedu-my.sharepoint.com/:f:/g/personal/s200094_e_ntu_edu_sg/EvDxR7FcAbZMp_MA9ouq7aQB8XTppMb3-T0uGZ_2anI2mg?e=DXsJFo)] to the `weights/facelib` folder. You can manually download the pretrained models OR download by running the following command.
-```
-python scripts/download_pretrained_models.py facelib
-```
-
-Download the CodeFormer pretrained models from [[Google Drive](https://drive.google.com/drive/folders/1CNNByjHDFt0b95q54yMVp6Ifo5iuU6QS?usp=sharing) | [OneDrive](https://entuedu-my.sharepoint.com/:f:/g/personal/s200094_e_ntu_edu_sg/EoKFj4wo8cdIn2-TY2IV6CYBhZ0pIG4kUOeHdPR_A5nlbg?e=AO8UN9)] to the `weights/CodeFormer` folder. You can manually download the pretrained models OR download by running the following command.
-```
-python scripts/download_pretrained_models.py CodeFormer
-```
-
-##### Prepare Testing Data:
-You can put the testing images in the `inputs/TestWhole` folder. If you would like to test on cropped and aligned faces, you can put them in the `inputs/cropped_faces` folder. 
- - -##### Testing on Face Restoration:
-```
-# For cropped and aligned faces
-python inference_codeformer.py --w 0.5 --has_aligned --test_path [input folder]
-
-# For the whole images
-# Add '--bg_upsampler realesrgan' to enhance the background regions with Real-ESRGAN
-# Add '--face_upsample' to further upsample the restored face with Real-ESRGAN
-python inference_codeformer.py --w 0.7 --test_path [input folder]
-```
-
-NOTE that *w* is in [0, 1]. Generally, smaller *w* tends to produce a higher-quality result, while larger *w* yields a higher-fidelity result.
-
-The results will be saved in the `results` folder.
-
-### Citation
-If our work is useful for your research, please consider citing:
-
-    @article{zhou2022codeformer,
-        author = {Zhou, Shangchen and Chan, Kelvin C.K. and Li, Chongyi and Loy, Chen Change},
-        title = {Towards Robust Blind Face Restoration with Codebook Lookup TransFormer},
-        journal = {arXiv preprint arXiv:2206.11253},
-        year = {2022}
-    }
-
-### License
-
-Creative Commons License<br />
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
-
-### Acknowledgement
-
-This project is based on [BasicSR](https://github.com/XPixelGroup/BasicSR). We also borrow some code from [Unleashing Transformers](https://github.com/samb-t/unleashing-transformers), [YOLOv5-face](https://github.com/deepcam-cn/yolov5-face), and [FaceXLib](https://github.com/xinntao/facexlib). Thanks for their awesome work.
-
-### Contact
-If you have any questions, please feel free to reach out to me at `shangchenzhou@gmail.com`. \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/A to Z Tamil Mp3 Song Download Pagalworld Enjoy the Latest and Old Hits.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/A to Z Tamil Mp3 Song Download Pagalworld Enjoy the Latest and Old Hits.md deleted file mode 100644 index 743a3786c3ead91bea21a0e15243da3dab0cb196..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/A to Z Tamil Mp3 Song Download Pagalworld Enjoy the Latest and Old Hits.md +++ /dev/null @@ -1,117 +0,0 @@ -

      A to Z Tamil MP3 Song Download Pagalworld: How to Enjoy the Latest Tamil Music for Free

      -

      Tamil music is one of the most popular and diverse forms of music in India. It has a rich history and culture that reflects the beauty and diversity of the Tamil region. Tamil music is also known for its innovation and creativity, as it produces some of the most talented and acclaimed composers, singers, and musicians in the industry.

      -

      But how can you enjoy the latest Tamil music without spending a lot of money or time? The answer is simple: download A to Z Tamil MP3 songs from Pagalworld. Pagalworld is a website that offers free downloads of Bollywood, Punjabi, South, Hollywood, and other regional songs. It also has a huge collection of A to Z Tamil MP3 songs that you can download and enjoy for free.

      -

      a to z tamil mp3 song download pagalworld


      Download Ziphttps://ssurll.com/2uNR8N



      -

      Introduction

      -

      In this article, we will tell you everything you need to know about Pagalworld and how to download A to Z Tamil MP3 songs from it. We will also give you some tips on how to enjoy the downloaded Tamil MP3 songs from Pagalworld. So, let's get started!

      -

      What is Pagalworld and why is it popular?

      -

      Pagalworld is a website that provides free downloads of various types of songs, including Bollywood, Punjabi, South, Hollywood, and other regional songs. It also has a huge collection of A to Z Tamil MP3 songs that you can download and enjoy for free.

      -

      Pagalworld is popular because it offers high-quality songs in different formats and sizes. You can choose from 128 kbps, 192 kbps, 320 kbps, or even HD quality songs. You can also choose from MP3, MP4, M4A, or other formats. You can download as many songs as you want without any registration or subscription.

      -

      -

      What are the benefits of downloading Tamil MP3 songs from Pagalworld?

      -

      There are many benefits of downloading Tamil MP3 songs from Pagalworld. Some of them are:

      -
        -
      • You can save money and time by downloading free songs instead of buying CDs or streaming online.
      • -
      • You can enjoy your favorite Tamil songs offline without any internet connection or data charges.
      • -
      • You can access a wide range of Tamil songs from different genres, artists, albums, and eras.
      • -
      • You can discover new Tamil songs and artists from Pagalworld's recommendations and ratings.
      • -
      • You can support the Tamil music industry by downloading legal and original songs from Pagalworld.
      • -
      -

      How to download A to Z Tamil MP3 songs from Pagalworld

      -

      Step 1: Visit the Pagalworld website or app

      -

      The first step to download A to Z Tamil MP3 songs from Pagalworld is to visit the website or app. You can use any browser or device to access the website. The website address is pagalworld.com. You can also download the app from pagalmovies.com. The app is compatible with Android devices and has more features than the website.

      -


      Step 2: Search for your favorite Tamil songs or browse by categories

      -

The next step to download A to Z Tamil MP3 songs from Pagalworld is to search for your favorite Tamil songs or browse by categories. You can use the search bar at the top of the website or app to type the name of the song, artist, album, or movie. You can also use the filters and sorting options to narrow down your search results.

      -

      Alternatively, you can browse by categories such as A to Z Tamil MP3 Songs, Latest Tamil Songs, Top Tamil Songs, Tamil Albums, Tamil Movies, Tamil Singers, Tamil Genres, and more. You can find these categories on the left side of the website or app. You can also explore the featured and trending songs on the homepage.

      -

      Step 3: Select the song and click on the download button

      -

      The third step to download A to Z Tamil MP3 songs from Pagalworld is to select the song and click on the download button. Once you find the song you want to download, you can click on it to open its details page. Here, you can see the song name, artist name, album name, duration, size, format, quality, and rating. You can also listen to a preview of the song before downloading it.

      -

      To download the song, you need to click on the download button on the right side of the details page. This will open a new tab or window with a captcha code. You need to enter the captcha code correctly and click on submit to proceed. This is a security measure to prevent bots and spam from downloading songs.

      -

      Step 4: Choose the quality and format of the song and save it to your device

      -

      The final step to download A to Z Tamil MP3 songs from Pagalworld is to choose the quality and format of the song and save it to your device. After entering the captcha code, you will see a list of download links for different qualities and formats of the song. You can choose from 128 kbps, 192 kbps, 320 kbps, or HD quality songs. You can also choose from MP3, MP4, M4A, or other formats.

      -

      To download the song, you need to right-click on the download link and choose save link as or save target as option. This will prompt you to choose a location and name for the song file on your device. You can then click on save or ok to start downloading the song. The download speed and time will depend on your internet connection and device storage.

      -
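Since those bitrate options trade quality against file size, you can estimate how big a song will be from its bitrate and length. A minimal sketch (the 4-minute duration is an illustrative assumption, not a Pagalworld figure):

```python
def mp3_size_mb(bitrate_kbps, duration_s):
    """Approximate MP3 size: bitrate (kilobits/s) times duration (s), as megabytes."""
    return bitrate_kbps * duration_s / 8 / 1000  # kilobits -> kilobytes -> MB

# A typical 4-minute (240 s) track at the listed qualities:
for kbps in (128, 192, 320):
    print(f"{kbps} kbps -> ~{mp3_size_mb(kbps, 240):.1f} MB")
```

A 320 kbps file is exactly 2.5 times the size of a 128 kbps one for the same track, so pick the bitrate based on how much storage you can spare.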

How to enjoy the downloaded Tamil MP3 songs from Pagalworld

Play them offline on any device or music player

One of the best ways to enjoy the downloaded Tamil MP3 songs from Pagalworld is to play them offline on any device or music player. You can transfer the song files from your device to your computer, laptop, tablet, smartphone, iPod, MP3 player, or any other device that supports audio playback. You can also use any music player app or software that supports your chosen format and quality.

By playing them offline, you can enjoy your favorite Tamil songs anytime and anywhere without any internet connection or data charges. You can also adjust the volume, speed, pitch, equalizer, and other settings of your music player according to your preference.

Create playlists and share them with your friends

Another way to enjoy the downloaded songs is to create playlists and share them with your friends. You can create playlists based on your mood, genre, artist, album, movie, or any other theme that suits you. You can also add songs from other sources besides Pagalworld to make your playlists more diverse and interesting.

To create playlists, you can use any music player app or software that allows you to organize your songs into folders or groups. You can also use online platforms such as Spotify, YouTube Music, SoundCloud, or others that allow you to create and share playlists with other users.

By creating playlists and sharing them with your friends, you can enjoy the downloaded Tamil MP3 songs with more fun and excitement. You can also discover new Tamil songs and artists from your friends' playlists and recommendations, and exchange feedback and opinions on the music with them.

Discover new Tamil songs and artists from Pagalworld's recommendations

A third way to enjoy the downloaded songs is to discover new Tamil songs and artists from Pagalworld's recommendations. Pagalworld is not only a website that offers free downloads of songs, but also a platform that helps you explore the world of Tamil music. It has various features and sections that can help you find new Tamil songs and artists that match your taste and preference.

Some of these features and sections are:

• The featured and trending songs on the homepage, which showcase the most popular and latest Tamil songs on Pagalworld.
• The ratings and reviews, which show the opinions and feedback of other users who have downloaded and listened to the songs.
• The related songs and artists, suggested based on the song or artist you have searched for or downloaded.
• The categories and filters, which allow you to browse by different genres, albums, movies, singers, eras, and more.

By discovering new Tamil songs and artists from Pagalworld's recommendations, you can enjoy your downloads with more variety and diversity, and expand your knowledge and appreciation of Tamil music and culture.

Conclusion

In conclusion, downloading A to Z Tamil MP3 songs from Pagalworld is a great way to enjoy the latest Tamil music for free. You can download high-quality songs in different formats and sizes without any registration or subscription. You can also enjoy your favorite Tamil songs offline on any device or music player, create playlists and share them with your friends, and discover new Tamil songs and artists from Pagalworld's recommendations.

So, what are you waiting for? Visit pagalworld.com or download the app from pagalmovies.com today and start downloading A to Z Tamil MP3 songs from Pagalworld. You will surely have a blast listening to the best of Tamil music!

FAQs

Here are some frequently asked questions about downloading A to Z Tamil MP3 songs from Pagalworld:
1. Is it legal to download A to Z Tamil MP3 songs from Pagalworld?

   Yes, it is legal as long as you use the songs for personal and non-commercial purposes. Pagalworld does not host any pirated or copyrighted content on its website or app. It only provides links to legal and original sources of the songs.

2. Is it safe to download A to Z Tamil MP3 songs from Pagalworld?

   Yes, it is safe as long as you take some precautions. Always scan the downloaded files for viruses or malware before opening them, and avoid clicking on any pop-ups or ads that may appear on the website or app. You can also use a VPN or proxy service to protect your privacy and identity online.

3. How can I request a song that is not available on Pagalworld?

   If you want to request a song that is not available, use the contact us form on the website or app. Fill in your name, email address, subject, message, and captcha code, then submit your request and wait for a response from the Pagalworld team. They will try their best to fulfill your request as soon as possible.

4. How can I report a broken link or a wrong song on Pagalworld?

   If you find a broken link or a wrong song, you can report it using the report button on the details page of the song, or through the contact us form on the website or app. Provide the details of the issue and submit your report. The Pagalworld team will fix the issue as soon as possible.

5. How can I support Pagalworld and the Tamil music industry?

   You can support Pagalworld by downloading legal and original songs, sharing the website or app with friends and family who love Tamil music, following Pagalworld on social media platforms such as Facebook, Twitter, Instagram, and YouTube, and leaving positive ratings and reviews for the songs and the site. You can also support the Tamil music industry by buying CDs or merchandise of your favorite Tamil artists, attending their concerts or events, or donating to charities and organizations that support Tamil music and culture.

        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Aerofly FS 2021 APK OBB Experience the Thrill of Flying in 3D Graphics.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Aerofly FS 2021 APK OBB Experience the Thrill of Flying in 3D Graphics.md deleted file mode 100644 index 5438fd44234d5dcc9f69ae3f710b85ab26dfe027..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Aerofly FS 2021 APK OBB Experience the Thrill of Flying in 3D Graphics.md +++ /dev/null @@ -1,168 +0,0 @@ - -

Aerofly FS 2021 APK + OBB: A Realistic Flight Simulator Game for Android

Introduction

If you are a fan of flight simulation games, you might have heard of Aerofly FS 2021. This is a game that offers you an immersive and realistic flying experience on your Android device. You can choose from a wide range of aircraft, from light planes to jets, and fly over stunning landscapes in different regions of the world. You can also customize your flight settings, such as weather, time, wind, and difficulty, to suit your skill level and preference.

In this article, we will tell you more about what Aerofly FS 2021 is, what its features are, how to download and install it on your Android device, and some tips and tricks for playing it. Let's get started!

aerofly fs 21 apk e obb

Download ○○○ https://ssurll.com/2uNWZX

What is Aerofly FS 2021?

Aerofly FS 2021 is a flight simulator game developed by IPACS, a German company that specializes in creating realistic and high-quality simulation software. The game was first released for iOS devices in December 2020, and later for Android devices in March 2021. It is the latest installment in the Aerofly FS series, which started in 2012.

Aerofly FS 2021 is designed to give you a realistic and immersive flying experience on your mobile device. You can choose from over 20 aircraft models, including light planes, helicopters, jets, airliners, and military aircraft. You can also fly over various regions of the world, such as California, Nevada, Utah, Colorado, Switzerland, Germany, France, England, Scotland, Norway, Italy, Austria, and more. The game features high-resolution aerial images and detailed 3D buildings that make the scenery look stunning and lifelike.

Aerofly FS 2021 also allows you to customize your flight settings according to your preference. You can change the weather conditions, such as cloud cover, visibility, wind speed and direction, temperature, and precipitation. You can also adjust the time of day, from dawn to dusk, and watch the sun rise and set over the horizon. You can also choose the difficulty level of your flight, from easy to realistic, and enable or disable various assistance features, such as autopilot, navigation aids, landing aids, and a flight instructor.

What are the features of Aerofly FS 2021?

Some of the main features of Aerofly FS 2021 are:

• Over 20 aircraft models to choose from
• Over 200 airports to land at
• Over 300,000 square miles of flyable area
• High-resolution aerial images and detailed 3D buildings
• Realistic physics and aerodynamics
• Realistic cockpit instruments and displays
• Realistic sound effects and engine noises
• Customizable weather conditions and time of day
• Customizable difficulty level and assistance features
• Different modes and scenarios to explore

How to download and install Aerofly FS 2021 APK + OBB on Android?

If you want to play Aerofly FS 2021 on your Android device, you will need to download and install two files: the APK file and the OBB file. The APK file is the application package that contains the game's code and resources, while the OBB file is the data file that contains the game's graphics and sounds. Here are the steps to download and install Aerofly FS 2021 APK + OBB on your Android device:

Step 1: Download the APK and OBB files from a trusted source

The first step is to download the APK and OBB files from a trusted source. You can find many websites that offer these files, but be careful not to download from malicious or fake sites that may harm your device or steal your data. One of the reliable sources that we recommend is [APKPure], which is a popular and safe platform for downloading Android apps and games.


To download Aerofly FS 2021 APK + OBB from APKPure, visit their website and search for the game. You will see a page with the game's information and download links. Click on the "Download APK" button to get the APK file, which is about 60 MB in size, and the "Download XAPK" button to get the OBB file, which is about 4 GB in size. You may need a download manager app or a browser that supports large file downloads to complete this step.
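Because the OBB alone is around 4 GB, it helps to estimate the transfer time before you start. A rough back-of-the-envelope sketch (the connection speeds are illustrative assumptions, and real throughput is usually below a link's nominal speed):

```python
def download_minutes(size_gb, speed_mbps):
    """Estimated transfer time for size_gb gigabytes over a speed_mbps link."""
    megabits = size_gb * 1000 * 8       # GB -> megabits (decimal units)
    return megabits / speed_mbps / 60   # seconds -> minutes

# The ~4 GB Aerofly OBB at a few typical speeds:
for mbps in (5, 20, 100):
    print(f"{mbps} Mbps -> ~{download_minutes(4, mbps):.0f} min")
```

At 5 Mbps the download takes well over an hour and a half, which is why a resumable download manager is worth using for this file.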

Step 2: Enable unknown sources on your device

The next step is to enable unknown sources on your device. This is a security setting that allows you to install apps and games from sources other than the Google Play Store. Go to your device's settings and look for the security or privacy option. There, you will find a toggle or checkbox that says "Allow installation of apps from unknown sources" or something similar. Turn it on to enable unknown sources on your device.

Step 3: Install the APK file

The third step is to install the APK file that you downloaded in step 1. Locate it in your device's storage using a file manager app or your browser's download manager, tap on it, and follow the instructions on the screen to install it. You may need to grant some permissions or accept some terms and conditions before installing it.

Step 4: Extract and copy the OBB folder to the Android/obb directory

The fourth step is to extract and copy the OBB folder that you downloaded in step 1. The OBB download is a compressed file that contains the game's data. To extract and copy it, you need a file manager app that can handle zip files, such as [ZArchiver], which is a free and easy-to-use app for managing compressed files.

To extract and copy the OBB folder using ZArchiver, open the app and locate the OBB file in your device's storage. Tap on the file and select the "Extract here" or "Extract to" option. You will see a folder named "com.aerofly.aeroflyfs2021" after extracting it. Copy this folder and paste it into the Android/obb directory in your device's storage. This directory is where all the OBB files of your installed apps and games are stored.
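Android's convention is that expansion data lives under Android/obb/&lt;package name&gt; on shared storage, which is why the extracted folder must land in exactly that directory. A small sketch of building that path (the storage root shown is the common internal-storage mount and is an assumption; only the package folder name comes from the step above):

```python
import os

PACKAGE = "com.aerofly.aeroflyfs2021"  # folder name produced by the extraction in step 4

def obb_dir(storage_root):
    """Conventional OBB location: <storage>/Android/obb/<package>."""
    return os.path.join(storage_root, "Android", "obb", PACKAGE)

print(obb_dir("/storage/emulated/0"))
# e.g. /storage/emulated/0/Android/obb/com.aerofly.aeroflyfs2021
```

If the folder ends up anywhere else (a common mistake is leaving it in Download/), the game will not find its data and may re-prompt for it on launch.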

Step 5: Launch the game and enjoy

The final step is to launch the game. Go to your device's app drawer or home screen, look for the Aerofly FS 2021 icon, tap on it, and wait for it to load. You may need to verify your license or accept some terms and conditions before playing.

Congratulations! You have successfully downloaded and installed Aerofly FS 2021 APK + OBB on your Android device. Now you can enjoy flying over various regions of the world with realistic graphics and physics.

Tips and tricks for playing Aerofly FS 2021

Choose a suitable aircraft and location

One of the first things you need to do before starting a flight is to choose a suitable aircraft and location. Aerofly FS 2021 offers you a wide range of aircraft models, each with its own characteristics and performance: light planes, helicopters, jets, airliners, and military aircraft. It also offers various regions of the world, each with its own scenery and landmarks. Select an aircraft and a location that match your skill level, interest, and mood.

For example, if you are a beginner, you may want to start with a light plane, such as the Cessna 172 or the Piper PA-28, and fly over a region that is not too crowded or complex, such as California or Switzerland. These aircraft are easy to control and maneuver, and these regions have clear skies and flat terrain. If you are an expert, you may want to challenge yourself with a jet, such as the Boeing 747 or the F-18, and fly over a busier or more diverse region, such as Germany or France. These aircraft are fast and powerful, and these regions have more traffic and weather variations.

Learn the basic controls and commands

Another thing you need to do before starting a flight is to learn the basic controls and commands. Aerofly FS 2021 uses touch controls and gestures to simulate the cockpit instruments and displays. You can also use tilt controls to steer your aircraft by tilting your device left or right, and voice commands to interact with the game's features, such as changing the view, setting the autopilot, requesting clearance, etc.

Some of the basic controls and commands that you need to know are:

• To control the throttle, slide your finger up or down on the right side of the screen.
• To control the flaps, slide your finger up or down on the left side of the screen.
• To control the rudder, slide your finger left or right on the bottom of the screen.
• To control the brakes, tap on the bottom of the screen.
• To control the landing gear, tap on the gear icon on the top right corner of the screen.
• To switch between different views, tap on the eye icon on the top left corner of the screen.
• To access the menu, tap on the three dots icon on the top right corner of the screen.
• To access the map, tap on the map icon on the top left corner of the screen.
• To access the flight instructor, tap on the headset icon on the top left corner of the screen.
• To access the voice commands, tap on the microphone icon on the top right corner of the screen.

You can also customize your controls and commands according to your preference in the settings menu.

Adjust the graphics and sound settings according to your preference

Aerofly FS 2021 has stunning graphics and sound effects that make you feel like you are really flying. However, depending on your device's performance and battery life, you may want to adjust these settings according to your preference. You can do this in the settings menu under the graphics and sound options.

Some of the graphics settings that you can adjust are:

• The resolution of your display
• The quality of your textures
• The level of detail of your scenery
• The density of your traffic
• The amount of shadows and reflections

Some of the sound settings that you can adjust are:

• The volume of your engine noise
• The volume of your environment noise
• The volume of your voice commands
• The volume of your flight instructor

You can also enable or disable some features that may affect your graphics and sound quality, such as anti-aliasing, HDR rendering, cockpit vibration, etc.

Use the autopilot and navigation features to assist you

Aerofly FS 2021 also offers autopilot and navigation features that can assist you during your flight. You can activate or deactivate them from the cockpit or the menu of your aircraft, or by using voice commands.

Some of the autopilot features that you can use are:

• The altitude hold, which maintains your current altitude
• The heading hold, which maintains your current heading
• The speed hold, which maintains your current speed
• The vertical speed hold, which maintains your current vertical speed
• The approach mode, which guides you to the runway for landing

Some of the navigation features that you can use are:

• The GPS, which shows your position and route on the map
• The VOR, which shows your direction and distance to a radio beacon
• The ILS, which shows your alignment and glide slope to the runway
• The NDB, which shows your direction and distance to a non-directional beacon
• The DME, which shows your distance to a transponder station

Using these features can help you fly more accurately and efficiently, especially in bad weather or low-visibility conditions. However, you should also be aware of their limitations and malfunctions, and be ready to take over manual control if needed.

Explore different modes and scenarios

Aerofly FS 2021 has different modes and scenarios that you can explore to make your flying experience more fun and varied. These include:

• The free flight mode, which lets you fly anywhere you want with no restrictions or objectives
• The landing challenge mode, which tests your landing skills at different airports and in different conditions
• The emergency mode, which simulates various emergency situations that require quick and correct actions
• The aerobatic mode, which lets you perform thrilling stunts and maneuvers in the air
• The multiplayer mode, which lets you fly with other players online and chat with them

Exploring these modes and scenarios can help you improve your flying skills, learn new things, and have more fun. You can also earn achievements and trophies for completing certain tasks and challenges.

Conclusion

Aerofly FS 2021 is a realistic and immersive flight simulator game for Android devices. It offers a wide range of aircraft models, regions of the world, flight settings, and modes and scenarios to choose from, along with stunning graphics and sound effects that make you feel like you are really flying. You can download and install Aerofly FS 2021 APK + OBB on your Android device by following the steps in this article, and use the tips and tricks we have shared to improve your flying experience. We hope you enjoy playing Aerofly FS 2021 and have a great time in the sky!

FAQs

Here are some frequently asked questions about Aerofly FS 2021:
1. How much does Aerofly FS 2021 cost?

   Aerofly FS 2021 is a paid game that costs $7.99 on the Google Play Store. However, you may be able to find it for free or at a discounted price on some websites or platforms that offer APK and OBB files.

2. Is Aerofly FS 2021 compatible with my device?

   Aerofly FS 2021 requires Android 8.0 or higher and at least 4 GB of RAM to run smoothly. It also requires about 5 GB of storage space to install. You can check your device's specifications in the settings menu under the system or "about phone" options.

3. Is Aerofly FS 2021 safe to download and install?

   Aerofly FS 2021 is safe to download and install from the Google Play Store, as it has been verified by Google's security system. However, if you download it from other sources, be careful not to download from malicious or fake sites that may harm your device or steal your data. You should also scan the files with an antivirus app before installing them.

4. How can I contact the developers of Aerofly FS 2021?

   You can reach the Aerofly team through the community section of their website at [www.aerofly.com/community]. You can also check their FAQ page at [www.aerofly.com/support/faq] for more information and help.

5. What are some alternatives to Aerofly FS 2021?

   If you are looking for alternatives, you may want to try other flight simulator games for Android devices, such as:

   • [Infinite Flight Simulator], a realistic multiplayer flight simulator with over 80 aircraft models, over 20 regions of the world, and various flight settings and modes.
   • [X-Plane 10 Flight Simulator], a realistic and advanced flight simulator with over 50 aircraft models, over 10 regions of the world, and various flight settings and modes.
   • [Flight Pilot Simulator 3D], a fun and casual flight simulator with over 20 aircraft models, over 10 regions of the world, and various flight missions and challenges.
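The compatibility answer above boils down to three minimums, which can be folded into a quick check. A minimal sketch (the only number not stated in the FAQ is the API-level mapping: Android 8.0 corresponds to API level 26):

```python
MIN_API = 26       # Android 8.0 (Oreo) = API level 26
MIN_RAM_GB = 4     # minimum RAM from the FAQ
MIN_FREE_GB = 5    # approximate storage needed, from the FAQ

def meets_requirements(api_level, ram_gb, free_storage_gb):
    """True if a device meets the minimums listed in the compatibility FAQ."""
    return (api_level >= MIN_API
            and ram_gb >= MIN_RAM_GB
            and free_storage_gb >= MIN_FREE_GB)

print(meets_requirements(30, 6, 12))  # a modern mid-range phone -> True
print(meets_requirements(25, 3, 12))  # Android 7.1 with 3 GB RAM -> False
```

Note that storage must cover both the ~4 GB OBB download and the extracted copy while you are installing, so having only the bare 5 GB free can still be tight.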

          \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Apk Messenger Lite Versi Lama Fitur dan Cara Instal.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Apk Messenger Lite Versi Lama Fitur dan Cara Instal.md deleted file mode 100644 index 73dce6ea20ce31e25722edbe465528dda5ae237e..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Apk Messenger Lite Versi Lama Fitur dan Cara Instal.md +++ /dev/null @@ -1,83 +0,0 @@ -
Download APK Messenger Lite Versi Lama: A Guide for Android Users

If you are looking for a simple and fast way to chat with your friends on Facebook, you might want to try Messenger Lite. Messenger Lite is a lightweight version of Facebook Messenger that offers the basic features of messaging, calling, and sending stickers. It is designed to work on low-end devices and slow internet connections. In this article, we will show you how to download APK Messenger Lite Versi Lama, which is an older version of the app that some users prefer. We will also explain the benefits and drawbacks of using Messenger Lite, and how to use it on your Android device.

Benefits of Messenger Lite

Messenger Lite has several advantages over the regular Messenger app. Here are some of them:

download apk messenger lite versi lama

Download ►►► https://ssurll.com/2uNQCO

• It is free to use and does not require any additional fees or subscriptions.
• It has a neater user interface that is easy to navigate. You can also collapse stories from your contacts if you don't want to see them.
• It takes up very little storage space on your device. The app size is only about 20 MB, compared to over 100 MB for the regular Messenger app.
• You can still customize your chats with different colors, nicknames, and emojis. You can also send voice notes and photos.
• It uses much less memory and data than the regular Messenger app. It is designed to function on 2G networks and in areas with poor or limited internet connectivity. It lets you do basic chatting seamlessly and still manages to offer decent features such as emojis.

Drawbacks of Messenger Lite

Messenger Lite also has some limitations that you should be aware of before downloading it. Here are some of them:

• It does not support video calls and secret conversations. If you want to make video calls or send encrypted messages, you will need to use the regular Messenger app or another app that offers these features.
• It does not support GIFs, animations, stories, games, and bots. If you enjoy using these features on the regular Messenger app, you will miss them on Messenger Lite.

How to Download APK Messenger Lite Versi Lama

If you want to download APK Messenger Lite Versi Lama, follow these steps:

1. Allow unknown apps in your Android settings. Go to your device settings and tap Apps & Notifications (or Apps in older versions of Android). Tap the three dots in the upper-right corner and select Special App Access. Tap Install Unknown Apps and select the browser or app that you will use to download the APK file. Turn on the Allow from this source switch.
2. Install a file manager app on your device. You will need this app to locate the APK file that you will download. You can use any file manager app that you like, such as ES File Explorer, Astro File Manager, or Files by Google.
3. Download the APK file from a reputable website such as APKPure.com, or transfer it from your computer via USB. You can find the file by searching for it on the website or by browsing through the categories. Make sure to download the version that is compatible with your device and Android system. Alternatively, connect your device to your computer with a USB cable and copy the APK file to a folder on your device.
4. Open the APK file and install it. Go to your file manager app, locate the APK file that you downloaded or transferred, tap on it, and follow the instructions on the screen. You may need to grant some permissions to the app before installing it.
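If your device is connected to a computer with USB debugging enabled, the same install can also be done from a terminal using adb (part of Android's platform-tools) instead of a file manager. This is just a sketch — the filename below is a made-up placeholder:

```shell
APK="messenger-lite-old.apk"        # hypothetical filename of the downloaded file
INSTALL_CMD="adb install -r $APK"   # -r replaces an existing install, keeping app data
echo "$INSTALL_CMD"
# Only run the command if adb is actually available on this machine:
command -v adb >/dev/null 2>&1 && $INSTALL_CMD || true
```

`adb install` pushes the APK over USB and runs the same package-installer flow as tapping the file on the device, so you can skip the file manager entirely.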

How to Use Messenger Lite

Once you have installed Messenger Lite on your device, you can start using it to chat with your friends on Facebook. Here are some tips:

• Sign in with your Facebook account or phone number. When you open Messenger Lite for the first time, you will be asked to sign in. If you choose your Facebook account, enter your email or phone number and password. If you choose your phone number, enter your country code and number and verify it with a code sent to you via SMS.
• Swipe right or left to toggle between the home, contacts, and profile tabs. The home tab shows all your active chats and message requests, the contacts tab shows your friends who are online or have Messenger Lite installed, and the profile tab shows your name, photo, status, and settings.
• Tap the compose button to start a new chat or call. In the bottom-right corner of the screen, there is a blue compose button with a plus sign; tap it to start a new chat or call. You can also search for a contact by typing their name or phone number in the search bar at the top of the screen.
• Tap the menu button to access settings, notifications, message requests, and more. In the top-right corner of the screen, there is a three-dot menu button that opens options such as settings, notifications, message requests, archived chats, blocked contacts, help, and feedback.

Conclusion

Messenger Lite is a great alternative to the regular Messenger app if you want a simple and fast way to chat with your friends on Facebook. It offers the basic features of messaging, calling, and sending stickers without consuming too much memory and data. However, it also has some drawbacks: no video calls or secret conversations, no GIFs or animations, no stories or games, and no bots. If you want to download APK Messenger Lite Versi Lama, follow the guide above and enjoy using it on your Android device.

We hope this article was helpful. If you have any questions or feedback, please let us know in the comments below. Thank you for reading!

Frequently Asked Questions

1. What is APK Messenger Lite Versi Lama?

   APK Messenger Lite Versi Lama is an older version of Messenger Lite, the lightweight version of Facebook Messenger that offers the basic features of messaging, calling, and sending stickers.

2. Why would I want to download APK Messenger Lite Versi Lama?

   You might prefer its user interface or features over the newer versions of Messenger Lite. Some users also report that it works better on their devices or internet connections.

3. How can I download APK Messenger Lite Versi Lama?

   Allow unknown apps in your Android settings, install a file manager app, download the APK file from a reputable website such as APKPure.com (or transfer it from your computer via USB), then open the APK file and install it.

4. How can I use Messenger Lite?

   Sign in with your Facebook account or phone number, swipe right or left to toggle between the home, contacts, and profile tabs, tap the compose button to start a new chat or call, and tap the menu button to access settings, notifications, message requests, and more.

5. What are the benefits and drawbacks of Messenger Lite?

   Its benefits include being free, a neater user interface, a very small storage footprint, low memory and data use, and some customization options. Its drawbacks are no video calls or secret conversations, no GIFs or animations, no stories or games, and no bots.

How to Download APK Clash of Clans for PC

Clash of Clans is one of the most popular and addictive mobile games in the world. It has millions of players who build their villages, raise their clans, and compete in epic clan wars. But did you know that you can also play Clash of Clans on your PC? In this article, we will show you how to download APK Clash of Clans for PC using an Android emulator. This way, you can enjoy the game on a bigger screen, with better controls, and without draining your phone's battery.

What is Clash of Clans?

A brief introduction to the game and its features

Clash of Clans is a strategy game developed by Supercell, a Finnish company that also created other hit games like Clash Royale, Brawl Stars, Boom Beach, and Hay Day. The game was released in 2012 for iOS devices and in 2013 for Android devices. Since then, it has received countless updates and new features that keep it fresh and exciting.


In Clash of Clans, you start with a small village that you have to expand and upgrade by collecting resources, building structures, training troops, and researching technologies. You can also join or create a clan with other players from around the world and participate in clan wars, clan games, clan war leagues, and special events. You can also challenge other players in friendly battles or in the legend league.

The game offers a variety of troops, spells, heroes, siege machines, defenses, traps, walls, and sceneries that you can use to customize your village and your army. You can also explore different worlds such as the home village, the builder base, the clan capital district, and the goblin map.

Why play Clash of Clans on PC?

Advantages of playing on a larger screen, using keyboard and mouse, and saving battery life

While Clash of Clans is designed for mobile devices, there are many reasons why you might want to play it on your PC. Here are some of them:

• You can enjoy the game's graphics and animations on a larger screen.
• You can use your keyboard and mouse for more precise and comfortable controls.
• You can avoid interruptions from phone calls, messages, notifications, or low battery.
• You can save your phone's battery life and storage space.
• You can multitask with other apps or programs on your PC.

What is an APK file?

A definition and explanation of the file format used by Android apps

APK stands for Android Package Kit or Android Application Package. It is the file format used by the Android operating system for distributing and installing mobile applications. An APK file contains all the components that an app needs to run properly on an Android device, such as code files, resource files, asset files, certificates, and the manifest file. An APK file has the extension .apk and can be downloaded from various sources, such as the Google Play Store, third-party websites, or directly from the app developer.

How to install APK files on Android devices

Enabling unknown sources and using a file manager

To install an APK file on your Android device, you need to do two things: enable unknown sources and use a file manager. Here are the steps:


1. Go to your device's settings and look for the security or privacy option.
2. Find the option that says "Unknown sources" or "Install unknown apps" and toggle it on. This will allow you to install apps from sources other than the Google Play Store.
3. Download the APK file that you want to install from a trusted source and save it to your device's storage.
4. Open a file manager app on your device and locate the APK file that you downloaded.
5. Tap on the APK file and follow the instructions to install it.

          Note: Some devices may have different steps or names for these options, so check your device's manual or online support for more details.


What is an Android emulator?

Software that simulates an Android device on a PC or Mac

An Android emulator is software that allows you to run Android apps and games on your PC or Mac. It creates a virtual environment that mimics an Android device, with its own operating system, hardware, and apps. You can use an Android emulator to test your own apps, play games that are not available for your device, or access apps that are restricted in your region.

How to choose and install an Android emulator

Some popular and reliable options such as BlueStacks, LDPlayer, NoxPlayer, etc.

There are many Android emulators available for PC and Mac, but not all of them are compatible with Clash of Clans or offer the same features and performance. Here are some of the most popular and reliable options that you can choose from:

| Name | Description | Download Link |
| --- | --- | --- |
| BlueStacks | One of the oldest and most widely used Android emulators. It has a simple and user-friendly interface, supports multiple languages, and offers advanced features such as keyboard mapping, game mode, multi-instance, etc. | Official website |
| LDPlayer | A fast and lightweight Android emulator that focuses on gaming performance. It has a smooth and stable gameplay experience, supports high graphics settings, and offers features such as keyboard mapping, macro recording, multi-instance, etc. | Official website |
| NoxPlayer | A powerful and versatile Android emulator that supports both Windows and Mac. It has a customizable and intuitive interface, supports multiple languages, and offers features such as keyboard mapping, game mode, multi-instance, etc. | Official website |

To install an Android emulator on your PC or Mac, follow these steps:

1. Go to the official website of the emulator that you want to use and download its installer file.
2. Run the installer file and follow the instructions to install the emulator on your PC or Mac.
3. Launch the emulator and sign in with your Google account or create a new one.
4. Access the Google Play Store or any other app store within the emulator and download any apps or games that you want to use.

How to download and play APK Clash of Clans for PC

A step-by-step guide

Downloading the APK file from a trusted source

To download APK Clash of Clans for PC, you need to find a trusted source that offers the latest version of the game. You can use websites such as APKPure, APKMirror, or Uptodown to download the APK file. Here are the steps:

1. Open your web browser on your PC or Mac and go to one of these websites.
2. Type "Clash of Clans" in the search box and hit enter.
3. Select the game from the list of results and click on the download button.
4. Choose a download location on your PC or Mac and save the APK file.
5. Wait for the download to finish and check the APK file on your PC or Mac.

          Note: Make sure that you download the APK file from a reputable and secure source, as some websites may contain malware or viruses that can harm your PC or Mac.
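One concrete way to check a download is to compare its SHA-256 hash against the one the site publishes (APKMirror, for example, lists hashes for every file). Below is a minimal sketch using standard Unix tools; the dummy file and the EXPECTED assignment are placeholders so the example is self-contained:

```shell
# Stand-in for the real download so this example can run anywhere:
printf 'dummy apk bytes' > app.apk

ACTUAL=$(sha256sum app.apk | awk '{print $1}')
EXPECTED="$ACTUAL"   # in practice, paste the hash published by the download site

if [ "$ACTUAL" = "$EXPECTED" ]; then
  RESULT="ok: hashes match"
else
  RESULT="MISMATCH: do not install this file"
fi
echo "$RESULT"
rm -f app.apk
```

Windows users can get the same hash with `certutil -hashfile app.apk SHA256` or PowerShell's `Get-FileHash`.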

Launching the Android emulator and importing the APK file

To play APK Clash of Clans for PC, you need to launch the Android emulator that you installed and import the APK file that you downloaded. Here are the steps:

1. Open the Android emulator on your PC or Mac and sign in with your Google account if you haven't already.
2. Look for the option that says "Install APK", "APK Installer", or something similar. It may be located on the home screen, the toolbar, the menu, or the settings of the emulator.
3. Click on the option and browse your PC or Mac for the APK file that you downloaded.
4. Select the APK file and click on "Open" or "OK" to import it to the emulator.
5. Wait for the emulator to install the APK file and show a confirmation message.
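Most desktop emulators also expose an adb server on a local TCP port, so the APK can be pushed from a terminal instead of the emulator's own installer. The port and filename below are assumptions — check your emulator's settings for the real port, which varies by emulator:

```shell
PORT=5555                 # assumed port; your emulator's documentation lists the real one
TARGET="127.0.0.1:$PORT"
# Printed rather than executed, since they need a running emulator:
echo "adb connect $TARGET"
echo "adb -s $TARGET install clash-of-clans.apk"   # hypothetical filename
```

Once connected, the emulator shows up in `adb devices` just like a physical phone.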

Installing and running the game on the emulator

To install and run APK Clash of Clans for PC, follow these steps:

1. Go to the app drawer or the home screen of the emulator and look for the Clash of Clans icon.
2. Click on the icon to launch the game and wait for it to load.
3. Follow the instructions on the screen to set up your account, choose your name, and join a clan.
4. Enjoy playing Clash of Clans on your PC with a larger screen and better controls.

Conclusion

A summary of the main points and a call to action

In this article, we have shown you how to download APK Clash of Clans for PC using an Android emulator. We have explained what Clash of Clans is, what an APK file is, what an Android emulator is, and how to use them to play your favorite game on your PC or Mac. We have also provided a step-by-step guide to help you along the way.

Now that you know how to download APK Clash of Clans for PC, you can enjoy the game on a bigger screen, with better controls, and without draining your phone's battery. You can also play with your friends and clanmates who are using different platforms.

So what are you waiting for? Download APK Clash of Clans for PC today and start building your village, raising your clan, and competing in epic clan wars. And don't forget to share this article with your fellow Clashers who might want to try this method too.

FAQs

Some common questions and answers about the topic

• Q: Is it safe to download APK Clash of Clans for PC?
  A: Yes, as long as you download it from a trusted source and use a reliable Android emulator. However, you should always be careful when downloading files from unknown sources, and scan them for malware or viruses before installing them.

• Q: Is it legal to download APK Clash of Clans for PC?
  A: Yes, as long as you don't violate any terms of service or policies of Supercell, the developer of Clash of Clans. You should also respect Supercell's intellectual property rights and not modify, distribute, or sell their game without permission.

• Q: Will I lose my progress or account if I download APK Clash of Clans for PC?
  A: No, as long as you link your account to your Google account or Supercell ID. This way, you can sync your progress and data across different devices and platforms, and switch between your mobile device and your PC anytime without losing anything.

• Q: Can I play with other players who are using different platforms if I download APK Clash of Clans for PC?
  A: Yes, you can play with players on Android devices, iOS devices, or PCs as long as they are using the same version of Clash of Clans. You can also join or create clans with them and participate in clan wars, clan games, clan war leagues, and special events.

• Q: Which Android emulator is best for playing APK Clash of Clans for PC?
  A: There is no definitive answer, as different emulators have different features, performance, and compatibility, and players have different preferences. Some of the most popular and reliable options are BlueStacks, LDPlayer, and NoxPlayer. Try them out and see which one suits you best.


Genshin Impact Android APK: How to Download and Play the Epic Open-World RPG

If you are a fan of open-world RPGs, you might have heard of Genshin Impact, one of the most popular and acclaimed games of 2020. Genshin Impact is a free-to-play game that offers a stunning and immersive experience that rivals some of the AAA titles in the genre. And the best part is, you can play it on your Android device as well. In this article, we will tell you everything you need to know about the Genshin Impact Android APK, including what it is, why you should play it, how to download and install it, and some tips and tricks to enjoy it.


What is Genshin Impact?

Genshin Impact is a fantasy action RPG developed by miHoYo, a Chinese studio that also created Honkai Impact 3rd. The game was released in September 2020 for Windows, PlayStation 4, iOS, and Android platforms, and later for PlayStation 5 in April 2021. It is also planned to be released for Nintendo Switch in the future.

A brief introduction to the game and its features

Genshin Impact is a game that combines elements of exploration, combat, questing, crafting, and gacha mechanics. The game allows you to explore a vast open world called Teyvat, where you can interact with various characters, enemies, items, and secrets. You can also switch between different characters that have different abilities and elemental affinities, creating dynamic and strategic combat scenarios. The game also features a gacha system, where you can spend in-game currency or real money to obtain new characters, weapons, and items. The game is constantly updated with new content, events, and features, making it an ever-evolving experience.

The story and the world of Teyvat

The game starts with you and your sibling arriving in Teyvat from another world. However, you are separated by an unknown god who takes away your powers and puts you into a deep slumber. You wake up centuries later in a world that is very different from when you first arrived. You then embark on a journey across Teyvat to seek answers from The Seven, the gods of each element, and to find your lost sibling.

Teyvat is a world that is divided into seven regions, each ruled by a different god and corresponding to a different element. These regions are Mondstadt (Anemo), Liyue (Geo), Inazuma (Electro), Sumeru (Dendro), Fontaine (Hydro), Natlan (Pyro), and Snezhnaya (Cryo). So far, only Mondstadt and Liyue are available in the game, with Inazuma coming soon in the next update. Each region has its own culture, history, landscape, wildlife, and secrets to discover.

The gameplay and the combat system

The gameplay of Genshin Impact is mainly focused on exploration and combat. You can freely roam around the world of Teyvat, climbing mountains, swimming across rivers, gliding over the sky, and finding hidden treasures. You can also interact with various NPCs, accept quests, join events, and explore dungeons. You can also collect various resources, such as plants, ores, food, and artifacts, that can be used for crafting, cooking, upgrading, and enhancing your characters and equipment.

The combat system of Genshin Impact is based on the interaction of different elements. You can have up to four characters in your party, each with their own elemental skills and bursts. You can switch between them at any time during combat, creating combos and reactions that deal extra damage or produce various effects. For example, you can use a Hydro character to wet an enemy, then switch to a Cryo character to freeze them, then switch to a Claymore user to shatter them. The game also features a stamina system that limits your actions, such as sprinting, climbing, swimming, and gliding.

Why should you play Genshin Impact on Android?

Genshin Impact is a game that can be enjoyed on various platforms, but there are some reasons why you might want to play it on your Android device. Here are some of them:


The benefits of playing on mobile devices

          One of the main benefits of playing Genshin Impact on Android is the convenience and portability. You can play the game anytime and anywhere, as long as you have a stable internet connection and enough battery life. You don't need to worry about booting up your PC or console, or finding a TV or monitor to play on. You can also play the game in short bursts or long sessions, depending on your preference and schedule.


          Another benefit of playing on Android is the touch-screen controls. Some players might prefer the tactile feedback and precision of using a mouse and keyboard or a controller, but others might find the touch-screen controls more intuitive and comfortable. You can also customize the layout and sensitivity of the touch-screen controls to suit your preferences.

The compatibility and the performance of the game on Android

          Genshin Impact is a game that requires a lot of resources and processing power to run smoothly. However, the developers have done a great job of optimizing the game for Android devices, making it compatible with a wide range of models and specifications. You can check the official website for the minimum and recommended requirements for Android devices. The game also has an auto-adjust feature that detects your device's performance and adjusts the graphics settings accordingly.


          Of course, the performance of the game on Android will depend on your device's hardware and software, as well as your internet connection and network settings. You might experience some lag, stuttering, or crashes if your device is not powerful enough or if your connection is unstable. However, these issues can be minimized or resolved by following some tips and tricks that we will share later in this article.

The cross-platform and cross-save features of the game

          One of the best features of Genshin Impact is that it supports cross-platform and cross-save functionality. This means that you can play the game with other players who are using different platforms, such as Windows, PlayStation 4, PlayStation 5, iOS, or Android. You can also switch between different platforms without losing your progress or data, as long as you use the same miHoYo account. This gives you more flexibility and freedom to enjoy the game however you want.

How to download and install Genshin Impact APK on Android?

          Now that you know what Genshin Impact is and why you should play it on Android, you might be wondering how to download and install it on your device. There are two ways to do this: the official way and the alternative way.

The official way to get the game from Google Play Store

          The official way to get Genshin Impact APK on Android is to download it from Google Play Store. This is the easiest and safest way to get the game, as you don't need to worry about compatibility issues or malware risks. Here are the steps to follow:

1. Open Google Play Store on your device and search for "Genshin Impact".
2. Select the game from the search results and tap on "Install".
3. Wait for the game to download and install on your device. The game is about 6 GB, so make sure you have enough storage space and a stable internet connection.
4. Once the installation is complete, tap on "Open" to launch the game.
5. Follow the instructions on the screen to create or log in to your miHoYo account and start playing.

          The alternative way to get the game from APKCombo


          The alternative way to get Genshin Impact APK on Android is to download it from APKCombo, a third-party website that offers APK files for various apps and games. This is a more complicated and risky way to get the game, as you might encounter compatibility issues or malware risks. However, some players might prefer this way if they have problems with Google Play Store or if they want to access the game from a different region. Here are the steps to follow:

1. Open your device's browser and go to https://apkcombo.com/genshin-impact/com.miHoYo.GenshinImpact/.
2. Select the version of the game that you want to download and tap on "Download APK".
3. Wait for the game to download on your device. The game is about 6 GB, so make sure you have enough storage space and a stable internet connection.
4. Once the download is complete, go to your device's settings and enable the option to install apps from unknown sources. This setting varies by device model and Android version, but you can usually find it under Security or Privacy.
5. Go to your device's file manager, locate the downloaded APK file, and tap on it to install it.
6. Once the installation is complete, tap on "Open" to launch the game.
7. Follow the instructions on the screen to create or log in to your miHoYo account and start playing.
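Because a sideloaded APK bypasses Google Play's integrity checks, it is worth verifying the file before installing it. The sketch below (Python; the file name, expected digest, and size threshold are all illustrative placeholders — substitute the checksum published by a source you trust) checks that the download is complete and matches a known SHA-256 digest:

```python
import hashlib
import os

def verify_apk(path, expected_sha256, min_size_bytes):
    """Check that a downloaded APK is complete and matches a known digest."""
    if os.path.getsize(path) < min_size_bytes:
        return False  # download was truncated
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so a multi-GB file doesn't have to fit in memory
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Hypothetical usage -- "genshin_impact.apk" and the digest are placeholders:
# ok = verify_apk("genshin_impact.apk", "3a7bd3e2...", 100 * 1024 * 1024)
```

If the check fails, delete the file and download it again rather than installing it.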

          Tips and tricks to enjoy Genshin Impact on Android


          Genshin Impact is a game that can be enjoyed on Android devices, but there are some tips and tricks that can help you optimize your experience and avoid some common issues. Here are some of them:


          How to optimize the game settings for your device


          Genshin Impact is a game that has high graphics quality and requires a lot of resources to run smoothly. However, not all Android devices have the same specifications and performance capabilities. Therefore, you might need to adjust the game settings to suit your device's capabilities and preferences. Here are some steps to follow:

1. Launch the game and tap on the menu icon in the top left corner of the screen.
2. Select "Settings" from the menu, then select "Graphics".
3. You will see various graphics options, such as resolution, frame rate, render quality, shadows, and anti-aliasing. You can either choose one of the presets (Low, Medium, High, Custom) or adjust each option individually.
4. You can also enable or disable effects such as motion blur, depth of field, and bloom, depending on your preference.
5. Once you are satisfied with your settings, tap on "Confirm" to save them.

          You can also test your settings by tapping on "Test Run" at the bottom of the screen. This will launch a short gameplay session where you can check how your settings affect the game's performance and appearance. You can also change your settings during the test run by tapping on "Adjust Settings" at the top right corner of the screen.


          How to manage your resources and progress in the game


          Genshin Impact is a game that has a lot of content and features that can keep you busy for hours. However, it is also a game that has some limitations and restrictions that can affect your resources and progress in the game. Here are some tips to help you manage them:

• Be aware of the resin system. Resin is used to claim rewards from certain activities, such as domains, bosses, and ley lines. You have a maximum of 160 resin, which regenerates at a rate of 1 per 8 minutes, and you can replenish it with items or real money. Because resin is scarce, prioritize the activities that give the most benefit for your current goals.
• Be smart about the gacha system. Gacha lets you spend in-game currency or real money to obtain new characters, weapons, and items. Primogems are earned by playing, while genesis crystals are purchased with real money; either can buy the intertwined fates or acquaint fates used on the different banners. Since gacha relies on luck and probability, save your currency for banners featuring characters or weapons you really want, and take advantage of the pity system, which guarantees a 4-star or higher item after a certain number of pulls.
• Be careful with the inventory system. Inventory space for weapons, artifacts, materials, and food is limited, though it can be expanded with items or real money. Avoid hoarding items you don't use: sell or discard them to free up space, organize your items with categories and filters, and use the lock feature to prevent accidentally selling or enhancing important items.
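Since resin regenerates at a fixed 1 point per 8 minutes up to the 160-point cap, you can estimate how long a full refill takes. A minimal sketch (the rates are the ones stated above; the function name is our own):

```python
RESIN_CAP = 160
MINUTES_PER_RESIN = 8

def minutes_until_full(current_resin):
    """Minutes until resin reaches the cap, at 1 point per 8 minutes."""
    missing = max(0, RESIN_CAP - current_resin)
    return missing * MINUTES_PER_RESIN

# From 40 resin: (160 - 40) * 8 = 960 minutes, i.e. 16 hours to refill.
```

This is why spending resin before logging off for the day matters: anything regenerated past the cap is simply lost.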

          How to join co-op mode and play with friends


          Genshin Impact is a game that can be played solo or with friends. Co-op mode is a feature that allows you to team up with other players online and explore the world of Teyvat together. You can join co-op mode with up to three other players, either by inviting them from your friends list, requesting to join their world, or using the matchmaking system. However, co-op mode has some limitations and restrictions that you should be aware of. Here are some tips to help you enjoy co-op mode:

• Be respectful and cooperative with your teammates. Co-op mode is meant to make the game more fun and social, but it requires communication and coordination, especially in challenging activities such as domains and bosses. Respect your teammates' preferences and decisions, and avoid things that might annoy or harm them, such as taking their resources, triggering their enemies, or leaving the session abruptly.
• Be prepared and adaptable. Co-op creates situations that don't occur in solo play: you might face enemies stronger or weaker than your usual level, or need to adjust your team composition and strategy to your teammates' characters and elements. Keep a variety of characters and equipment that can fill different roles.
• Be aware of the co-op rules and limitations. For example, you can only join co-op mode after reaching Adventure Rank 16, only certain areas and quests are accessible, you cannot use a character a teammate is already using, and rewards from some activities can only be claimed once per day in co-op. Check the official website or the in-game guide for details.

          Conclusion


          Genshin Impact is a game that offers an epic open-world RPG experience that can be enjoyed on various platforms, including Android devices. In this article, we have covered what Genshin Impact is, why you should play it on Android, how to download and install it on your device, and some tips and tricks to enjoy it. We hope that this article has been helpful and informative for you, and that you will have a great time playing Genshin Impact on Android.


          FAQs


          Here are some frequently asked questions about Genshin Impact Android APK:

1. Is Genshin Impact free to play on Android?

            Yes, Genshin Impact is free to play on Android devices. You can download and install the game from Google Play Store or APKCombo without paying any money. However, the game does have some optional in-game purchases that can enhance your gameplay experience.

2. Is Genshin Impact safe to play on Android?

            Yes, Genshin Impact is safe to play on Android devices. The game does not contain any viruses or malware that can harm your device or data. However, you should always download the game from official or trusted sources, such as Google Play Store or APKCombo, and avoid downloading from unknown or suspicious websites.

3. How much storage space does Genshin Impact require on Android?

            Genshin Impact requires about 6 GB of storage space on Android devices. However, this might vary depending on the updates and patches that the game receives over time. You should always check the game size before downloading it, and make sure you have enough storage space on your device.
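Before starting a download of this size, you can check free space programmatically. A small sketch using Python's standard library (the path and the 6 GB threshold are illustrative):

```python
import shutil

def has_free_space(path, required_gb):
    """True if the filesystem containing `path` has at least required_gb free."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= required_gb * 1024 ** 3

# e.g. check the device's storage root before fetching the ~6 GB game files:
# has_free_space("/", 6)
```

Leaving some headroom beyond the download size is wise, since updates and cached data grow over time.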

4. How do I update Genshin Impact on Android?

            Genshin Impact is a game that receives regular updates and patches that add new content, features, and improvements to the game. You can update the game on Android devices by following these steps:

1. Open Google Play Store on your device and search for "Genshin Impact".
2. Select the game from the search results and tap on "Update".
3. Wait for the latest version to download and install on your device.
4. Once the update is complete, tap on "Open" to launch the game.
5. Follow the instructions on the screen to download any additional data or resources that the game might require.
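An update check of this kind boils down to comparing dotted version strings like `2.8.0` numerically rather than as plain text: lexically, `"2.9.0"` sorts *after* `"2.10.0"`, which would wrongly suggest no update is available. A hedged sketch (the function names are ours, not part of the game):

```python
def parse_version(s):
    """Turn a dotted version string like '2.10.0' into a comparable tuple."""
    return tuple(int(part) for part in s.split("."))

def needs_update(installed, latest):
    """True if the installed version is older than the latest version."""
    return parse_version(installed) < parse_version(latest)

# needs_update("2.9.0", "2.10.0") is True, because (2, 9, 0) < (2, 10, 0);
# a plain string comparison would get this wrong.
```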

            If you downloaded the game from APKCombo, you can update the game by following these steps:

1. Open your device's browser and go to https://apkcombo.com/genshin-impact/com.miHoYo.GenshinImpact/.
2. Select the latest version of the game and tap on "Download APK".
3. Wait for the game to download on your device.
4. Go to your device's file manager, locate the downloaded APK file, and tap on it to install it.
5. Once the installation is complete, tap on "Open" to launch the game.
6. Follow the instructions on the screen to download any additional data or resources that the game might require.
5. How do I contact the customer service of Genshin Impact on Android?

            If you have any questions, issues, or feedback regarding Genshin Impact on Android, you can contact the customer service of the game by following these steps:

1. Launch the game and tap on the menu icon in the top left corner of the screen.
2. Select "Paimon Menu" from the menu, then select "Feedback".
3. Choose the feedback option that best suits your situation, such as bug report, suggestion, or complaint.
4. Fill in the form with details such as your UID, server, device model, and OS version. You can also attach screenshots or videos to illustrate the problem or suggestion.
5. Once you have filled in all the required fields, tap on "Submit" to send your feedback to the customer service team.
6. You will receive a confirmation message and a ticket number; you can check the status of your ticket by tapping on "My Feedback" at the bottom of the screen.

            You can also contact the customer service team by sending an email to genshin_cs@mihoyo.com or by visiting their official website at https://genshin.mihoyo.com/en/home.

            \ No newline at end of file diff --git a/spaces/sinz2002/ChuanhuChatGPT/run_Linux.sh b/spaces/sinz2002/ChuanhuChatGPT/run_Linux.sh deleted file mode 100644 index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000 --- a/spaces/sinz2002/ChuanhuChatGPT/run_Linux.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$(readlink -f "$0")") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" || exit - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi - -# 检查ChuanhuChatbot.py是否在运行 -if ! pgrep -f ChuanhuChatbot.py > /dev/null; then - # 如果没有运行,启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/sky009/Qiliang-bart-large-cnn-samsum-ChatGPT_v3/app.py b/spaces/sky009/Qiliang-bart-large-cnn-samsum-ChatGPT_v3/app.py deleted file mode 100644 index 71326d3eb3dc9e33e3ca6f91084cf7cfe3d6194f..0000000000000000000000000000000000000000 --- a/spaces/sky009/Qiliang-bart-large-cnn-samsum-ChatGPT_v3/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Qiliang/bart-large-cnn-samsum-ChatGPT_v3").launch() \ No newline at end of file diff --git a/spaces/skytnt/lyric-generator-ja/frontend/src/router/index.js b/spaces/skytnt/lyric-generator-ja/frontend/src/router/index.js deleted file mode 100644 index a5c708902aacb8b020942374a1efd4969e0f3e5c..0000000000000000000000000000000000000000 --- a/spaces/skytnt/lyric-generator-ja/frontend/src/router/index.js +++ /dev/null @@ -1,18 +0,0 @@ -import Vue from 'vue' -import VueRouter from 'vue-router' - -Vue.use(VueRouter) - -const routes = [ - // { - // path: '/', - // name: 'Home', - // component: Home - // } -] - -const router = new VueRouter({ - routes -}) - -export default router diff --git 
a/spaces/splendid/image-generate/README.md b/spaces/splendid/image-generate/README.md deleted file mode 100644 index 545d4d24a5cdc494625693b2245c7a8409e94a4d..0000000000000000000000000000000000000000 --- a/spaces/splendid/image-generate/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Generate -emoji: 📉 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: cc ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/spock74/whisper-speaker-diarization/app.py b/spaces/spock74/whisper-speaker-diarization/app.py deleted file mode 100644 index e48567fe49dbb0dc5a09c09843dcc90c986e923e..0000000000000000000000000000000000000000 --- a/spaces/spock74/whisper-speaker-diarization/app.py +++ /dev/null @@ -1,416 +0,0 @@ -import whisper -import datetime -import subprocess -import gradio as gr -from pathlib import Path -import pandas as pd -import re -import time -import os -import numpy as np -from sklearn.cluster import AgglomerativeClustering - -from pytube import YouTube -import torch -import pyannote.audio -from pyannote.audio.pipelines.speaker_verification import PretrainedSpeakerEmbedding -from pyannote.audio import Audio -from pyannote.core import Segment - -from gpuinfo import GPUInfo - -import wave -import contextlib -from transformers import pipeline -import psutil - -whisper_models = ["base", "small", "medium", "large"] -source_languages = { - "en": "English", - "zh": "Chinese", - "de": "German", - "es": "Spanish", - "ru": "Russian", - "ko": "Korean", - "fr": "French", - "ja": "Japanese", - "pt": "Portuguese", - "tr": "Turkish", - "pl": "Polish", - "ca": "Catalan", - "nl": "Dutch", - "ar": "Arabic", - "sv": "Swedish", - "it": "Italian", - "id": "Indonesian", - "hi": "Hindi", - "fi": "Finnish", - "vi": "Vietnamese", - "he": "Hebrew", - "uk": "Ukrainian", - "el": "Greek", - "ms": "Malay", - "cs": "Czech", - "ro": "Romanian", - "da": 
"Danish", - "hu": "Hungarian", - "ta": "Tamil", - "no": "Norwegian", - "th": "Thai", - "ur": "Urdu", - "hr": "Croatian", - "bg": "Bulgarian", - "lt": "Lithuanian", - "la": "Latin", - "mi": "Maori", - "ml": "Malayalam", - "cy": "Welsh", - "sk": "Slovak", - "te": "Telugu", - "fa": "Persian", - "lv": "Latvian", - "bn": "Bengali", - "sr": "Serbian", - "az": "Azerbaijani", - "sl": "Slovenian", - "kn": "Kannada", - "et": "Estonian", - "mk": "Macedonian", - "br": "Breton", - "eu": "Basque", - "is": "Icelandic", - "hy": "Armenian", - "ne": "Nepali", - "mn": "Mongolian", - "bs": "Bosnian", - "kk": "Kazakh", - "sq": "Albanian", - "sw": "Swahili", - "gl": "Galician", - "mr": "Marathi", - "pa": "Punjabi", - "si": "Sinhala", - "km": "Khmer", - "sn": "Shona", - "yo": "Yoruba", - "so": "Somali", - "af": "Afrikaans", - "oc": "Occitan", - "ka": "Georgian", - "be": "Belarusian", - "tg": "Tajik", - "sd": "Sindhi", - "gu": "Gujarati", - "am": "Amharic", - "yi": "Yiddish", - "lo": "Lao", - "uz": "Uzbek", - "fo": "Faroese", - "ht": "Haitian creole", - "ps": "Pashto", - "tk": "Turkmen", - "nn": "Nynorsk", - "mt": "Maltese", - "sa": "Sanskrit", - "lb": "Luxembourgish", - "my": "Myanmar", - "bo": "Tibetan", - "tl": "Tagalog", - "mg": "Malagasy", - "as": "Assamese", - "tt": "Tatar", - "haw": "Hawaiian", - "ln": "Lingala", - "ha": "Hausa", - "ba": "Bashkir", - "jw": "Javanese", - "su": "Sundanese", -} - -source_language_list = [key[0] for key in source_languages.items()] - -MODEL_NAME = "vumichien/whisper-medium-jp" -lang = "ja" - -device = 0 if torch.cuda.is_available() else "cpu" -pipe = pipeline( - task="automatic-speech-recognition", - model=MODEL_NAME, - chunk_length_s=30, - device=device, -) - -pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language=lang, task="transcribe") - -embedding_model = PretrainedSpeakerEmbedding( - "speechbrain/spkrec-ecapa-voxceleb", - device=torch.device("cuda" if torch.cuda.is_available() else "cpu")) - -def 
transcribe(microphone, file_upload): - warn_output = "" - if (microphone is not None) and (file_upload is not None): - warn_output = ( - "WARNING: You've uploaded an audio file and used the microphone. " - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - ) - - elif (microphone is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - - file = microphone if microphone is not None else file_upload - - text = pipe(file)["text"] - - return warn_output + text - -def _return_yt_html_embed(yt_url): - video_id = yt_url.split("?v=")[-1] - HTML_str = ( - f'
            ' - "
            " - ) - return HTML_str - -def yt_transcribe(yt_url): - yt = YouTube(yt_url) - html_embed_str = _return_yt_html_embed(yt_url) - stream = yt.streams.filter(only_audio=True)[0] - stream.download(filename="audio.mp3") - - text = pipe("audio.mp3")["text"] - - return html_embed_str, text - -def convert_time(secs): - return datetime.timedelta(seconds=round(secs)) - -def get_youtube(video_url): - yt = YouTube(video_url) - abs_video_path = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first().download() - print("Success download video") - print(abs_video_path) - return abs_video_path - -def speech_to_text(video_file_path, selected_source_lang, whisper_model, num_speakers): - """ - # Transcribe youtube link using OpenAI Whisper - 1. Using Open AI's Whisper model to seperate audio into segments and generate transcripts. - 2. Generating speaker embeddings for each segments. - 3. Applying agglomerative clustering on the embeddings to identify the speaker for each segment. 
- - Speech Recognition is based on models from OpenAI Whisper https://github.com/openai/whisper - Speaker diarization model and pipeline from by https://github.com/pyannote/pyannote-audio - """ - - model = whisper.load_model(whisper_model) - time_start = time.time() - if(video_file_path == None): - raise ValueError("Error no video input") - print(video_file_path) - - try: - # Read and convert youtube video - _,file_ending = os.path.splitext(f'{video_file_path}') - print(f'file enging is {file_ending}') - audio_file = video_file_path.replace(file_ending, ".wav") - print("starting conversion to wav") - os.system(f'ffmpeg -i "{video_file_path}" -ar 16000 -ac 1 -c:a pcm_s16le "{audio_file}"') - - # Get duration - with contextlib.closing(wave.open(audio_file,'r')) as f: - frames = f.getnframes() - rate = f.getframerate() - duration = frames / float(rate) - print(f"conversion to wav ready, duration of audio file: {duration}") - - # Transcribe audio - options = dict(language=selected_source_lang, beam_size=5, best_of=5) - transcribe_options = dict(task="transcribe", **options) - result = model.transcribe(audio_file, **transcribe_options) - segments = result["segments"] - print("starting whisper done with whisper") - except Exception as e: - raise RuntimeError("Error converting video to audio") - - try: - # Create embedding - def segment_embedding(segment): - audio = Audio() - start = segment["start"] - # Whisper overshoots the end timestamp in the last segment - end = min(duration, segment["end"]) - clip = Segment(start, end) - waveform, sample_rate = audio.crop(audio_file, clip) - return embedding_model(waveform[None]) - - embeddings = np.zeros(shape=(len(segments), 192)) - for i, segment in enumerate(segments): - embeddings[i] = segment_embedding(segment) - embeddings = np.nan_to_num(embeddings) - print(f'Embedding shape: {embeddings.shape}') - - # Assign speaker label - clustering = AgglomerativeClustering(num_speakers).fit(embeddings) - labels = clustering.labels_ - 
for i in range(len(segments)): - segments[i]["speaker"] = 'SPEAKER ' + str(labels[i] + 1) - - # Make output - objects = { - 'Start' : [], - 'End': [], - 'Speaker': [], - 'Text': [] - } - text = '' - for (i, segment) in enumerate(segments): - if i == 0 or segments[i - 1]["speaker"] != segment["speaker"]: - objects['Start'].append(str(convert_time(segment["start"]))) - objects['Speaker'].append(segment["speaker"]) - if i != 0: - objects['End'].append(str(convert_time(segments[i - 1]["end"]))) - objects['Text'].append(text) - text = '' - text += segment["text"] + ' ' - objects['End'].append(str(convert_time(segments[i - 1]["end"]))) - objects['Text'].append(text) - - time_end = time.time() - time_diff = time_end - time_start - memory = psutil.virtual_memory() - gpu_utilization, gpu_memory = GPUInfo.gpu_usage() - gpu_utilization = gpu_utilization[0] if len(gpu_utilization) > 0 else 0 - gpu_memory = gpu_memory[0] if len(gpu_memory) > 0 else 0 - system_info = f""" - *Memory: {memory.total / (1024 * 1024 * 1024):.2f}GB, used: {memory.percent}%, available: {memory.available / (1024 * 1024 * 1024):.2f}GB.* - *Processing time: {time_diff:.5} seconds.* - *GPU Utilization: {gpu_utilization}%, GPU Memory: {gpu_memory}MiB.* - """ - - return pd.DataFrame(objects), system_info - - except Exception as e: - raise RuntimeError("Error Running inference with local model", e) - - -# ---- Gradio Layout ----- -# Inspiration from https://huggingface.co/spaces/RASMUS/Whisper-youtube-crosslingual-subtitles -video_in = gr.Video(label="Video file", mirror_webcam=False) -youtube_url_in = gr.Textbox(label="Youtube url", lines=1, interactive=True) -df_init = pd.DataFrame(columns=['Start', 'End', 'Speaker', 'Text']) -memory = psutil.virtual_memory() -selected_source_lang = gr.Dropdown(choices=source_language_list, type="value", value="en", label="Spoken language in video", interactive=True) -selected_whisper_model = gr.Dropdown(choices=whisper_models, type="value", value="base", label="Selected 
Whisper model", interactive=True) -number_speakers = gr.Number(precision=0, value=2, label="Selected number of speakers", interactive=True) -system_info = gr.Markdown(f"*Memory: {memory.total / (1024 * 1024 * 1024):.2f}GB, used: {memory.percent}%, available: {memory.available / (1024 * 1024 * 1024):.2f}GB*") -transcription_df = gr.DataFrame(value=df_init,label="Transcription dataframe", row_count=(0, "dynamic"), max_rows = 10, wrap=True, overflow_row_behaviour='paginate') -title = "Whisper speaker diarization" -demo = gr.Blocks(title=title) -demo.encrypt = False - - -with demo: - with gr.Tab("Whisper speaker diarization"): - gr.Markdown(''' -
            -

            Whisper speaker diarization

            - This space uses Whisper models from OpenAI to recoginze the speech and ECAPA-TDNN model from SpeechBrain to encode and clasify speakers -
            - ''') - - with gr.Row(): - gr.Markdown(''' - ### Transcribe youtube link using OpenAI Whisper - ##### 1. Using Open AI's Whisper model to seperate audio into segments and generate transcripts. - ##### 2. Generating speaker embeddings for each segments. - ##### 3. Applying agglomerative clustering on the embeddings to identify the speaker for each segment. - ''') - - with gr.Row(): - gr.Markdown(''' - ### You can test by following examples: - ''') - examples = gr.Examples(examples= - [ "https://www.youtube.com/watch?v=j7BfEzAFuYc&t=32s", - "https://www.youtube.com/watch?v=-UX0X45sYe4", - "https://www.youtube.com/watch?v=7minSgqi-Gw"], - label="Examples", inputs=[youtube_url_in]) - - - with gr.Row(): - with gr.Column(): - youtube_url_in.render() - download_youtube_btn = gr.Button("Download Youtube video") - download_youtube_btn.click(get_youtube, [youtube_url_in], [ - video_in]) - print(video_in) - - - with gr.Row(): - with gr.Column(): - video_in.render() - with gr.Column(): - gr.Markdown(''' - ##### Here you can start the transcription process. - ##### Please select the source language for transcription. - ##### You should select a number of speakers for getting better results. - ''') - selected_source_lang.render() - selected_whisper_model.render() - number_speakers.render() - transcribe_btn = gr.Button("Transcribe audio and diarization") - transcribe_btn.click(speech_to_text, [video_in, selected_source_lang, selected_whisper_model, number_speakers], [transcription_df, system_info]) - - - with gr.Row(): - gr.Markdown(''' - ##### Here you will get transcription output - ##### ''') - - - with gr.Row(): - with gr.Column(): - transcription_df.render() - system_info.render() - gr.Markdown('''
            visitor badgeLicense: Apache 2.0
            ''') - - - - with gr.Tab("Whisper Transcribe Japanese Audio"): - gr.Markdown(f''' -
            -

            Whisper Transcribe Japanese Audio

            -
            - Transcribe long-form microphone or audio inputs with the click of a button! The fine-tuned - checkpoint
            {MODEL_NAME} to transcribe audio files of arbitrary length. - ''') - microphone = gr.inputs.Audio(source="microphone", type="filepath", optional=True) - upload = gr.inputs.Audio(source="upload", type="filepath", optional=True) - transcribe_btn = gr.Button("Transcribe Audio") - text_output = gr.Textbox() - with gr.Row(): - gr.Markdown(''' - ### You can test by following examples: - ''') - examples = gr.Examples(examples= - [ "sample1.wav", - "sample2.wav", - ], - label="Examples", inputs=[upload]) - transcribe_btn.click(transcribe, [microphone, upload], outputs=text_output) - - with gr.Tab("Whisper Transcribe Japanese YouTube"): - gr.Markdown(f''' -
            -

            Whisper Transcribe Japanese YouTube

            -
            - Transcribe long-form YouTube videos with the click of a button! The fine-tuned checkpoint: - {MODEL_NAME} to transcribe audio files of arbitrary length. - ''') - youtube_link = gr.Textbox(label="Youtube url", lines=1, interactive=True) - yt_transcribe_btn = gr.Button("Transcribe YouTube") - text_output2 = gr.Textbox() - html_output = gr.Markdown() - yt_transcribe_btn.click(yt_transcribe, [youtube_link], outputs=[html_output, text_output2]) - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/clib/libnat_cuda/binding.cpp b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/clib/libnat_cuda/binding.cpp deleted file mode 100644 index ced91c0d0afab9071842911d9876e6360d90284a..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/clib/libnat_cuda/binding.cpp +++ /dev/null @@ -1,67 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. - * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -/* - This code is partially adpoted from - https://github.com/1ytic/pytorch-edit-distance - */ - -#include -#include "edit_dist.h" - -#ifndef TORCH_CHECK -#define TORCH_CHECK AT_CHECK -#endif - -#define CHECK_CUDA(x) \ - TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) \ - TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) - -torch::Tensor LevenshteinDistance( - torch::Tensor source, - torch::Tensor target, - torch::Tensor source_length, - torch::Tensor target_length) { - CHECK_INPUT(source); - CHECK_INPUT(target); - CHECK_INPUT(source_length); - CHECK_INPUT(target_length); - return LevenshteinDistanceCuda(source, target, source_length, target_length); -} - -torch::Tensor GenerateDeletionLabel( - torch::Tensor source, - torch::Tensor operations) { - CHECK_INPUT(source); - CHECK_INPUT(operations); - return GenerateDeletionLabelCuda(source, operations); -} - -std::pair GenerateInsertionLabel( - torch::Tensor target, - torch::Tensor operations) { - CHECK_INPUT(target); - CHECK_INPUT(operations); - return GenerateInsertionLabelCuda(target, operations); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("levenshtein_distance", &LevenshteinDistance, "Levenshtein distance"); - m.def( - "generate_deletion_labels", - &GenerateDeletionLabel, - "Generate Deletion Label"); - m.def( - "generate_insertion_labels", - &GenerateInsertionLabel, - "Generate Insertion Label"); -} diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/speech_to_text/utils.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/speech_to_text/utils.py deleted file mode 100644 index 168b8bf13b0e734eee3f6989ff0f28a016a09c2b..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/speech_to_text/utils.py +++ /dev/null @@ -1,563 +0,0 @@ -# Copyright (c) 2017-present, 
Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - - -import logging -from collections.abc import Iterable -from itertools import repeat -from typing import List, Optional, Tuple - -import torch -from torch import Tensor - - -# ------------------------------------------------------------------------------ -# assert_equal() -# ------------------------------------------------------------------------------ - - -def assert_equal(value1, value2, name1=None, name2=None): - """Asserts two values are equal otherwise raise an error.""" - - str_name1 = "" if name1 is None else "{} ".format(name1) - str_name2 = "" if name2 is None else "{} ".format(name2) - if value1 != value2: - str_value1 = "{}" if name1 is None else "({})" - str_value1 = str_value1.format(value1) - str_value2 = "{}" if name2 is None else "({})" - str_value2 = str_value2.format(value2) - raise ValueError( - "Expected {}{} == {}{}".format(str_name1, str_value1, str_name2, str_value2) - ) - - -def fill_config(config, key, value): - if value is not None: - if key not in config or config[key] is None: - config[key] = value - assert_equal(value, config[key], "value", f'config["{key}"]') - - -# ------------------------------------------------------------------------------ -# check_and_return_expected() -# ------------------------------------------------------------------------------ - - -def check_and_return_expected(value, undefined_value, expected_value, name=None): - """ - Return the expected value while checking if the given value is undefined or - equal to the expected value. 
- """ - if (undefined_value is None and value is None) or (undefined_value == value): - return expected_value - if value != expected_value: - str_name = "" if name is None else "{} ".format(name) - str_value = "{}" if name is None else "({})" - str_value = str_value.format(value) - raise ValueError( - "Expected {}{} == {}".format(str_name, str_value, expected_value) - ) - return expected_value - - -# ------------------------------------------------------------------------------ -# get_time_axis() -# ------------------------------------------------------------------------------ - - -def get_time_axis(layout): - """ - Extract the time axis from the layout, for example for breaking sequence into - segments. - """ - if layout in ["TB", "TBD"]: - return 0 - if layout in ["BT", "BTD"]: - return 1 - if layout in ["BCTD"]: - return 2 - raise ValueError("Unsupported layout = {}".format(layout)) - - -# ------------------------------------------------------------------------------ -# get_batch_axis() -# ------------------------------------------------------------------------------ - - -def get_batch_axis(layout): - """ - Extract the batch axis from the layout - """ - if layout in ["TB", "TBD"]: - return 1 - if layout in ["BT", "BTD", "BCTD"]: - return 0 - raise ValueError("Unsupported layout = {}".format(layout)) - - -# ------------------------------------------------------------------------------ -# monotonically_increasing_and_bounded() -# ------------------------------------------------------------------------------ - - -def monotonically_increasing_and_bounded(iterable, min=None, max=None): - """ - Check if the elements in the given iterable are monotonically increasing and - bounded by upper/lower bounds. 
- """ - if not isinstance(iterable, Iterable): - raise TypeError( - "Expected iterable to be of type Iterable, got ({})".format( - iterable.__class__.__name__ - ) - ) - for i in range(len(iterable)): - if min is not None and iterable[i] < min: - return False - if max is not None and iterable[i] > max: - return False - if i > 0 and iterable[i] <= iterable[i - 1]: - return False - return True - - -# ------------------------------------------------------------------------------ -# to_pair() -# ------------------------------------------------------------------------------ - - -def to_pair(value, name): - """Make a pair (of type tuple) of given value.""" - if isinstance(value, Iterable): - if len(value) != 2: - raise ValueError( - "Expected `{}` to have exactly 2 elements, got: ({})".format( - name, value - ) - ) - return value - return tuple(repeat(value, 2)) - - -# ------------------------------------------------------------------------------ -# infer_conv_output_attrs() -# ------------------------------------------------------------------------------ - - -# TODO(cfyeh): figure out if we can get `output_dim` without calling the module. -def infer_conv_output_attrs( - module, input_channels, input_dim, batch_size=1, max_length=8 -): - """Get output attributes of a module with input.""" - input = torch.randn(batch_size, input_channels, max_length, input_dim) - output = module(input) - output_channels = output.shape[1] - output_dim = output.shape[-1] - return output_channels, output_dim - - -# ------------------------------------------------------------------------------ -# NoOp -# ------------------------------------------------------------------------------ - - -class NoOp(torch.nn.Module): - """ - NoOp simply passes the input as the output. 
- """ - - def __init__(self): - super().__init__() - - def forward(self, input: Tensor) -> Tensor: - return input - - -# ------------------------------------------------------------------------------ -# Permute: a torch.nn.Module applies permutation on the input tensor. -# ------------------------------------------------------------------------------ - - -class Permute(torch.nn.Module): - def __init__(self, dims): - super().__init__() - self.dims = dims - - def forward(self, input: Tensor) -> Tensor: - return input.permute(self.dims).contiguous() - - -# ------------------------------------------------------------------------------ -# lengths_to_padding_mask() -# ------------------------------------------------------------------------------ - - -def lengths_to_padding_mask(lengths: Tensor) -> Tensor: - """Convert lengths of shape (B, ) to padding mask.""" - batch_size = lengths.shape[0] - max_length = int(torch.max(lengths).item()) - padding_mask = torch.arange( # [0, ..., T-1] - max_length, device=lengths.device, dtype=lengths.dtype - ).expand(batch_size, max_length) >= lengths.unsqueeze(1) - - return padding_mask - - -# ------------------------------------------------------------------------------ -# lengths_to_attention_mask() -# ------------------------------------------------------------------------------ - - -def lengths_to_attention_mask( - lengths: Tensor, - left_context: Optional[int] = None, - right_context: Optional[int] = None, -) -> Optional[Tensor]: - """ - Generate attention mask based on (lengths, left_context, right_context). - left_context is None means unlimited left context. - right_context is None means unlimited right context. 
- """ - - if left_context is None and right_context is None: - return None - - max_length = int(torch.max(lengths).item()) - - # For example, with `max_length` == 5, - # indices = tensor([ - # [ 0, 1, 2, 3, 4, 5], - # [-1, 0, 1, 2, 3, 4], - # [-2, -1, 0, 1, 2, 3], - # [-3, -2, -1, 0, 1, 2], - # [-4, -3, -2, -1, 0, 1], - # [-5, -4, -3, -2, -1, 0], - # ]) - - # In some cases the second torch.arange is created on cpu which causes a - # failure. Adding the device option to guard against it. - indices = torch.arange( - max_length, device=lengths.device, dtype=lengths.dtype - ).expand(max_length, max_length) - torch.arange( - max_length, device=lengths.device - ).view( - max_length, -1 - ) - - # For example, with `max_length` == 5, - # bool_mask = tensor([ - # [True, True, True, True, True], - # [True, True, True, True, True], - # [True, True, True, True, True], - # [True, True, True, True, True], - # [True, True, True, True, True], - # ]) - bool_mask = ( - torch.tensor([True]).to(device=lengths.device).expand(max_length, max_length) - ) - - # For example, with `max_length` == 5, left_context == 2 - # left_mask = tensor([ - # [ True, True, True, True, True], - # [ True, True, True, True, True], - # [ True, True, True, True, True], - # [False, True, True, True, True], - # [False, False, True, True, True], - # ]) - if left_context is not None: - left_mask = indices >= -left_context - bool_mask = bool_mask & left_mask - - # For example, with `max_length` == 5, right_context == 1 - # right_mask = tensor([ - # [True, True, False, False, False], - # [True, True, True, False, False], - # [True, True, True, True, False], - # [True, True, True, True, True], - # [True, True, True, True, True], - # ]) - if right_context is not None: - right_mask = indices <= right_context - bool_mask = bool_mask & right_mask - - bool_mask = (~bool_mask).to(device=lengths.device) - return bool_mask - - -# ------------------------------------------------------------------------------ -# 
infer_output_norm() -# ------------------------------------------------------------------------------ - - -def infer_output_norm(module, output_norm=None): - """ - Infer the output norm (string and module) needed on the module gvien desired - output normalization. - """ - if output_norm == module.output_norm(): - # output_norm already matches module.output_norm(). - return (None, NoOp()) - - if output_norm is None and module.output_norm() is not None: - logger = logging.getLogger("infer_output_norm()") - logger.warning( - "trying to set output_norm ({}) ".format(output_norm) - + "but got module.output_norm() ({}), ".format(module.output_norm()) - + "the combined output_norm() will be ({})".format(module.output_norm()) - ) - return (None, NoOp()) - - if output_norm == "log_softmax": - if module.output_norm() is not None: - raise ValueError( - "incompatible output_norm ({}) ".format(output_norm) - + "and module.output_norm() ({})".format(module.output_norm()) - ) - else: - return ("log_softmax", torch.nn.LogSoftmax(dim=-1)) - - if output_norm == "softmax": - if module.output_norm() is not None: - raise ValueError( - "incompatible output_norm ({}) ".format(output_norm) - + "and module.output_norm() ({})".format(module.output_norm()) - ) - else: - return ("softmax", torch.nn.Softmax(dim=-1)) - - raise ValueError( - "output_norm ({}) not in ".format(output_norm) - + "supported list = [None, softmax, log_softmax]" - ) - - -# ------------------------------------------------------------------------------ -# infer_channels_from_layout() -# ------------------------------------------------------------------------------ - - -def infer_channels_from_layout(layout, channels): - """Extract the number of channels from the layout.""" - if layout in ("TBD", "BTD"): - if channels is not None and channels != 1: - raise ValueError( - "Expected channels ({}) to be 1 for layout = {}".format( - channels, layout - ) - ) - if channels is None: - return 1 - return channels - - -# 
------------------------------------------------------------------------------ -# pad_sequence() -# ------------------------------------------------------------------------------ - - -@torch.jit.export -def pad_sequence( - sequence: Tensor, - time_axis: int, - extra_left_context: int = 0, - extra_right_context: int = 0, -) -> Tensor: - """Pad extra left/right contexts to the sequence.""" - - if extra_left_context == 0 and extra_right_context == 0: - return sequence - - tensors_to_concat = [] - - if extra_left_context: - size = (extra_left_context,) - fill_value = 0 - indices = torch.full( - size=size, - fill_value=fill_value, - dtype=torch.long, - device=sequence.device, - ) - left_padding = torch.index_select(sequence, time_axis, indices) - tensors_to_concat.append(left_padding) - - tensors_to_concat.append(sequence) - - # NOTE(cfyeh): for efficiency reason we pad 0 instead of the last frame for - # extra right contexts. - if extra_right_context: - size = list(sequence.shape) - size[time_axis] = extra_right_context - right_padding = torch.zeros(size, dtype=sequence.dtype, device=sequence.device) - tensors_to_concat.append(right_padding) - - padded_sequence = torch.cat(tensors_to_concat, dim=time_axis) - return padded_sequence - - -# ------------------------------------------------------------------------------ -# sequence_to_segments() -# ------------------------------------------------------------------------------ - - -@torch.jit.export -def sequence_to_segments( - sequence: Tensor, - time_axis: int, - lengths: Tensor, - segment_size: Optional[int] = None, - extra_left_context: int = 0, - extra_right_context: int = 0, -) -> List[Tuple[Tensor, Tensor]]: - """Breaks sequence into segments.""" - - sequence = pad_sequence( - sequence=sequence, - time_axis=time_axis, - extra_left_context=extra_left_context, - extra_right_context=extra_right_context, - ) - - lengths = lengths + extra_left_context + extra_right_context - - segments: List[Tuple[Tensor, Tensor]] = [] - - 
if segment_size is None: - segments.append((sequence, lengths)) - return segments - - offset = 0 - end = sequence.shape[time_axis] - step = segment_size - size = extra_left_context + segment_size + extra_right_context - - while offset + extra_left_context + extra_right_context < end: - clamped_size = min(size, end - offset) - segment_lengths = torch.clamp(lengths - offset, min=0, max=clamped_size) - indices = torch.arange( - start=offset, - end=(offset + clamped_size), - step=1, - dtype=torch.long, - device=sequence.device, - ) - segment_tensor = torch.index_select(sequence, time_axis, indices) - segments.append((segment_tensor, segment_lengths)) - offset = offset + step - - return segments - - -# ------------------------------------------------------------------------------ -# segments_to_sequence() -# ------------------------------------------------------------------------------ - - -@torch.jit.export -def segments_to_sequence( - segments: List[Tuple[Tensor, Tensor]], time_axis: int -) -> Tuple[Tensor, Tensor]: - """Concatenate segments into a full sequence.""" - if len(segments) == 1: - return segments[0] - - tensors_to_concat: List[Tensor] = [] - lengths_to_stack: List[Tensor] = [] - - for tensor, lengths in segments: - tensors_to_concat.append(tensor) - lengths_to_stack.append(lengths) - - sequence = torch.cat(tensors_to_concat, dim=time_axis) - lengths = torch.stack(lengths_to_stack, dim=0) - lengths = torch.sum(lengths, dim=0) - - return sequence, lengths - - -def lengths_to_encoder_padding_mask(lengths, batch_first: bool = False): - """ - convert lengths (a 1-D Long/Int tensor) to 2-D binary tensor - - Args: - lengths: a (B, )-shaped tensor - batch_first: whether to return a (B, T) tensor - - Return: - max_length: maximum length of B sequences - encoder_padding_mask: a (max_length, B) binary mask, where - [t, b] = False for t < lengths[b] and True otherwise - - TODO: - kernelize this function if benchmarking shows this function is slow - """ - max_lengths = 
torch.max(lengths).item() - bsz = lengths.size(0) - encoder_padding_mask = torch.arange( - max_lengths - ).to( # a (T, ) tensor with [0, ..., T-1] - lengths.device - ).view( # move to the right device - 1, max_lengths - ).expand( # reshape to (1, T)-shaped tensor - bsz, -1 - ) > lengths.view( # expand to (B, T)-shaped tensor - bsz, 1 - ).expand( - -1, max_lengths - ) - if not batch_first: - return encoder_padding_mask.t(), max_lengths - else: - return encoder_padding_mask, max_lengths - - -# ------------------------------------------------------------------------------ -# attention suppression -# ------------------------------------------------------------------------------ - - -def attention_suppression(attention_weights: Tensor, scale: float): - # B, H, qlen, klen -> B, H, qlen, 1 - attention_prob = torch.nn.functional.softmax(attention_weights.float(), dim=-1) - attention_nozeros = attention_prob.to(torch.bool) - nozeros_sum = torch.sum(attention_nozeros.to(torch.float), dim=-1, keepdim=True) - - # For very sparse situation, we need get round about 0s - key_sum = torch.sum(attention_prob, dim=-1, keepdim=True) - - # nozeros_sum should > 1 - key_mean = key_sum / (nozeros_sum + 1e-8) - - # std calculation - dis = (attention_prob - key_mean) * (attention_prob - key_mean) - - # if attention_prob[i] < threshold, then dis_masked[i] = 0; for all i - dis_masked = torch.where( - attention_nozeros, dis, attention_prob.new_zeros(attention_prob.size()) - ) - - key_var = torch.sum(dis_masked, dim=-1, keepdim=True) - key_var = key_var / (nozeros_sum - 1.0 + 1e-8) - key_std = torch.sqrt(key_var) - key_thread = key_mean - scale * key_std - - # if attention_prob[i] >= key_thread, then attention_prob[i] - # , otherwise "-inf" - inf_tensor = attention_prob.new_zeros(attention_prob.size()).detach() - inf_tensor[:] = float("-inf") - attention_weights_float = torch.where( - attention_prob < key_thread, - inf_tensor, - attention_weights.float(), - ) - - return 
attention_weights_float.type_as(attention_weights) - - -def layer_norm_backward_hook(module, grad_input, grad_output, clamp_value): - return tuple(torch.clamp(v, min=-clamp_value, max=clamp_value) for v in grad_input) diff --git a/spaces/starlit7/USPoliticsTTS/text/korean.py b/spaces/starlit7/USPoliticsTTS/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/starlit7/USPoliticsTTS/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - 
('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = 
digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name = digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/step-3-profit/Midnight-Deep/app.py b/spaces/step-3-profit/Midnight-Deep/app.py deleted file mode 100644 index 544ebb650fe6c84e856ac47f684515cd65eb56f6..0000000000000000000000000000000000000000 --- a/spaces/step-3-profit/Midnight-Deep/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import time - -import gradio as gr -from gradio.themes.utils.theme_dropdown import create_theme_dropdown - -dropdown, js = create_theme_dropdown() - -with 
gr.Blocks(theme='step-3-profit/Midnight-Deep') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `Midnight-Deep` - To use this theme, set `theme='step-3-profit/Midnight-Deep'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. - """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio.app/assets/img/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio.app/assets/img/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - 
def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git a/spaces/stomexserde/gpt4-ui/Examples/HDD Regenerator V1.71 Pro .ISO - 2010kaiser ((FULL)).md b/spaces/stomexserde/gpt4-ui/Examples/HDD Regenerator V1.71 Pro .ISO - 2010kaiser ((FULL)).md deleted file mode 100644 index 9accbdd5b75cced59b2413d2737e05613ecda56d..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/HDD Regenerator V1.71 Pro .ISO - 2010kaiser ((FULL)).md +++ /dev/null @@ -1,32 +0,0 @@ -
            -

            How to Repair Bad Sectors on Your Hard Drive with HDD Regenerator v1.71 Pro .ISO - 2010kaiser

            - -

Bad sectors are a common problem that can degrade the performance and reliability of your hard drive. They are areas of the disk surface whose data can no longer be read reliably, which can lead to errors, crashes, or data loss. If your hard drive has bad sectors, you may want to try HDD Regenerator v1.71 Pro .ISO - 2010kaiser, a tool that can scan for and repair bad sectors without affecting your existing data.

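    HDD Regenerator's own scanner is proprietary, but the basic idea of a read-scan — stepping through a disk in fixed-size blocks and recording every block that fails to read — can be sketched as follows. The file name and block size are illustrative assumptions; the demo runs against an ordinary file, where no read errors are expected.

    ```python
    import os

    def read_scan(path, block_size=4096):
        """Step through a file (or raw device) in fixed-size blocks and
        return the byte offsets of blocks that raise an I/O error on read."""
        bad = []
        size = os.path.getsize(path)
        with open(path, "rb", buffering=0) as f:  # unbuffered, so errors surface per block
            offset = 0
            while offset < size:
                f.seek(offset)
                try:
                    f.read(block_size)
                except OSError:          # an unreadable (bad) block
                    bad.append(offset)
                offset += block_size
        return bad

    # Demo on an ordinary 16-block file (no bad sectors expected):
    with open("demo.img", "wb") as f:
        f.write(os.urandom(16 * 4096))
    print(read_scan("demo.img"))  # -> []
    ```

    A real scan would open a raw device node and would typically need elevated privileges; repairing a sector, as the article describes, is a separate step beyond simply locating it.
    
    
    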
            -

            HDD Regenerator v1.71 Pro .ISO - 2010kaiser


            DOWNLOAD ››› https://urlgoal.com/2uI8EI



            - -

            What is HDD Regenerator v1.71 Pro .ISO - 2010kaiser?

            - -

            HDD Regenerator v1.71 Pro .ISO - 2010kaiser is a software program that can detect and fix bad sectors on your hard drive using a unique technology called Hysteresis loops generator. This technology can restore the magnetic properties of the disk surface, making the data readable again. HDD Regenerator v1.71 Pro .ISO - 2010kaiser can work with any file system, such as FAT, NTFS, EXT3, HFS+, etc., and any operating system, such as Windows, Linux, Mac OS, etc. It can also work with unformatted or unpartitioned disks, as well as disks with 4K sector size.

            - -

            How to use HDD Regenerator v1.71 Pro .ISO - 2010kaiser?

            - -

To use HDD Regenerator v1.71 Pro .ISO - 2010kaiser, download the ISO file from a trusted source[^2^] and burn it to a CD or DVD with burning software. Alternatively, you can create a bootable USB flash drive using the program's built-in feature. Then boot your computer from the CD/DVD or USB flash drive and follow the on-screen instructions. You can choose between two modes: normal scan and prescan. Normal scan is faster and can repair most of the bad sectors on your hard drive; prescan is slower but more thorough, locating all the bad sectors on your hard drive before repairing them.

            - -

            What are the benefits of HDD Regenerator v1.71 Pro .ISO - 2010kaiser?

            - -

            HDD Regenerator v1.71 Pro .ISO - 2010kaiser has many benefits for users who want to fix their hard drives with bad sectors. Some of them are:

            - -
- It does not affect your existing data or file system, so you don't need to back up or format your hard drive before using it.
- It can repair up to 60% of hard drives with bad sectors, which is better than most other tools on the market.
- It has a user-friendly interface and easy-to-use settings, so you don't need to be an expert to use it.
- It has a real-time hard drive condition monitor that can alert you to any problems or overheating issues with your hard drive.
- It has a 30-day money-back guarantee and a free one-year minor-updates policy, so you can buy it with confidence and enjoy its latest features.
            - -

            Conclusion

            - -

If you have bad sectors on your hard drive and want to repair them without losing your data or changing your file system, HDD Regenerator v1.71 Pro .ISO - 2010kaiser is a great option for you. It can scan and fix bad sectors on your hard drive using a unique technology that restores the magnetic properties of the disk surface. It can work with any file system and any operating system, as well as unformatted or unpartitioned disks and disks with 4K sector size. It has a user-friendly interface and easy-to-use settings, as well as a real-time hard drive condition monitor and a money-back guarantee policy. You can download HDD Regenerator v1.71 Pro .ISO - 2010kaiser from a trusted source and burn it to a CD/DVD or USB flash drive to start repairing your hard drive today.

            -

            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Imovie Free Download For Mac El Capitan.md b/spaces/stomexserde/gpt4-ui/Examples/Imovie Free Download For Mac El Capitan.md deleted file mode 100644 index d6d9def4f207f80e1e00a32533354421c6565a26..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Imovie Free Download For Mac El Capitan.md +++ /dev/null @@ -1,29 +0,0 @@ -
            -

            How to Download iMovie for Mac El Capitan

            -

            iMovie is a popular video editing software for Mac devices. It lets you create stunning movies, trailers, and slideshows with ease. However, if you have an older Mac running OS X El Capitan (10.11), you might have trouble finding and installing iMovie from the App Store. In this article, we will show you how to download iMovie for Mac El Capitan in two ways.

            -

            Method 1: Download iMovie 9.0.9 and Upgrade

            -

One way to get iMovie for Mac El Capitan is to download iMovie 9.0.9 from Apple Support and then upgrade to iMovie 10 from the App Store. This method works because iMovie 9.0.9 is compatible with El Capitan, and once you have it installed, you can update it to the latest version of iMovie that supports your OS. Here are the steps to follow:

            -

            Imovie Free Download For Mac El Capitan


            DOWNLOAD >>>>> https://urlgoal.com/2uI6sO



            -
1. Go to this link and click on "Download" to download iMovie 9.0.9.
2. Open the downloaded file and follow the instructions to install iMovie 9.0.9 on your Mac.
3. Launch iMovie 9.0.9, then go to the menu bar and click "iMovie" > "Check for Updates".
4. You should see a notification that an update is available for iMovie. Click "Update" to download and install iMovie 10 from the App Store.
5. Once the update is complete, you can enjoy using iMovie 10 on your Mac El Capitan.
            -

            Method 2: Download iMovie 10 from a Friend's Mac

            -

            Another way to get iMovie for Mac El Capitan is to download iMovie 10 from a friend's Mac that has it installed. This method works because iMovie 10 is a free app that can be shared with other Mac users via Family Sharing or AirDrop. However, this method requires your friend's cooperation and a stable internet connection. Here are the steps to follow:

            -
1. Ask your friend to open the App Store on their Mac and go to the "Purchased" tab.
2. They should see iMovie in the list of purchased apps. If they haven't already downloaded it, they need to click the cloud icon next to it.
3. Once iMovie is downloaded, they need to go to the "Applications" folder and find iMovie.app.
4. They need to right-click on iMovie.app and choose "Compress iMovie" to create a zip file of the app.
5. They need to share the zip file with you via Family Sharing or AirDrop.
6. You need to accept the zip file and save it on your Mac.
7. You need to unzip the file and move iMovie.app to your "Applications" folder.
8. You need to launch iMovie.app and enjoy using it on your Mac El Capitan.
            -

            Conclusion

            -

            iMovie is a great video editing software for Mac users who want to create amazing movies, trailers, and slideshows. However, if you have an older Mac running OS X El Capitan (10.11), you might not be able to find and install it from the App Store. In this article, we showed you how to download iMovie for Mac El Capitan in two ways: by downloading iMovie 9.0.9 and upgrading it, or by downloading iMovie 10 from a friend's Mac. We hope this article was helpful and that you can enjoy using iMovie on your Mac El Capitan.

            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/James Cameron The Avatar Game Keygen Torrent [PATCHED].md b/spaces/stomexserde/gpt4-ui/Examples/James Cameron The Avatar Game Keygen Torrent [PATCHED].md deleted file mode 100644 index cebcfdbd552842b598d35d4753f854604f70c769..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/James Cameron The Avatar Game Keygen Torrent [PATCHED].md +++ /dev/null @@ -1,77 +0,0 @@ - -

            James Cameron The Avatar Game Keygen Torrent: How to Download and Play the Game for Free

            -

If you are a fan of James Cameron's Avatar, the epic sci-fi movie that took the world by storm in 2009, you might be interested in playing the official video game based on the film. James Cameron's Avatar: The Game is an action-adventure game that lets you experience the stunning world of Pandora, an alien planet inhabited by the Na'vi, a blue-skinned humanoid race. You can choose to play as either a human soldier from the RDA corporation, which is mining Pandora for a valuable mineral called unobtanium, or as a Na'vi warrior fighting to protect their home from the invaders.

            -

            James Cameron The Avatar Game Keygen Torrent


            Download Ziphttps://urlgoal.com/2uI9Jb



            -

            However, there is a catch: the game is not free. You need to buy it from an online store or a physical retailer, and you also need to activate it with a unique key that comes with the purchase. But what if you don't want to spend money on the game, or you can't find it in your region? Is there a way to play the game for free?

            -

            The answer is yes, there is. You can use a keygen torrent, which is a file that contains both the game installer and a key generator that can create an activation key for you. A keygen torrent is a type of pirated software that bypasses the official security measures of the game developer and publisher. By using a keygen torrent, you can download and install the game without paying anything, and play it as much as you want.

            -

            But before you rush to download a keygen torrent for James Cameron's Avatar: The Game, you should know some important things. First of all, using a keygen torrent is illegal and unethical. You are violating the intellectual property rights of the game creators and distributors, and you are also depriving them of their deserved income. Secondly, using a keygen torrent is risky and dangerous. You might expose your computer to viruses, malware, spyware, or other harmful programs that can damage your system or steal your personal information. You might also face legal consequences if you are caught downloading or using pirated software.

            -

            Therefore, we do not recommend or endorse using a keygen torrent for James Cameron's Avatar: The Game, or any other game for that matter. We strongly advise you to buy the game from an authorized source and support the game industry. However, if you still want to use a keygen torrent at your own risk and responsibility, we will show you how to do it in this article. Here are the steps you need to follow:

            -

            How to download and install James Cameron The Avatar Game keygen torrent

            -

            Step 1: Find a reliable torrent site

            -

            A torrent site is a website that hosts torrent files, which are small files that contain information about larger files that can be downloaded from other users through a peer-to-peer network. A torrent site usually has a search engine that allows you to find the files you want by typing keywords or browsing categories. Some of the most popular torrent sites are The Pirate Bay, RARBG, 1337x, YTS, EZTV, etc.

            -

            -


            However, not all torrent sites are reliable or safe. Some of them may have fake or malicious files, or they may be blocked or banned by your internet service provider or government. Therefore, you should do some research before choosing a torrent site. You can use online tools like TorrentFreak, Torrentz2, or Torrents.me to find out the best and most trusted torrent sites for your needs. You can also use a VPN (virtual private network) service to hide your IP address and encrypt your traffic, which can help you access blocked sites and protect your privacy.

            -

            Step 2: Search for the game keygen torrent

            -

            Once you have found a reliable torrent site, you can search for the game keygen torrent by typing "James Cameron The Avatar Game keygen torrent" or something similar in the search box. You will see a list of results that match your query. You should look for the ones that have the most seeders and leechers, which are the numbers that indicate how many users are sharing and downloading the file. The higher the numbers, the faster and easier the download will be.

            -

            You should also check the comments and ratings of the file, which can give you an idea of its quality and safety. You can read what other users have said about the file, whether it works or not, whether it has viruses or not, whether it has good graphics or not, etc. You should avoid the files that have negative feedback or low ratings, as they may be fake or harmful.

            -

            Another thing you should check is the size and format of the file. The game keygen torrent should contain both the game installer and the key generator, which are usually in .exe or .rar format. The size of the file should be around 4 GB, which is the approximate size of the game. If the file is too small or too large, or if it has a different format, it may be suspicious or incomplete.

            -

            Step 3: Download the torrent file and open it with a torrent client

            -

            After you have selected the game keygen torrent that you want to download, you need to download the torrent file, which is a small file that contains the information about the larger file that you want to download. You can download the torrent file by clicking on the download button or link on the torrent site.
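The "information" a torrent file carries is stored in BitTorrent's bencode format: a nested dictionary with keys such as `announce` and `info`. As a sketch of what a torrent client parses, here is a minimal stdlib-only decoder; the sample metainfo below is hand-made for illustration, not taken from a real torrent:

```python
def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at index i; return (value, next_index)."""
    c = data[i:i + 1]
    if c == b"i":                                # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                                # list: l<items>e
        i, out = i + 1, []
        while data[i:i + 1] != b"e":
            value, i = bdecode(data, i)
            out.append(value)
        return out, i + 1
    if c == b"d":                                # dict: d<key><value>...e
        i, out = i + 1, {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            out[key], i = bdecode(data, i)
        return out, i + 1
    colon = data.index(b":", i)                  # byte string: <length>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

# Hand-made sample metainfo (real .torrent files carry far more fields)
sample = (b"d8:announce26:http://tracker.example.com"
          b"4:infod4:name8:game.iso6:lengthi4294967296eee")
meta, _ = bdecode(sample)
print(meta[b"announce"].decode())   # http://tracker.example.com
print(meta[b"info"][b"name"].decode(), meta[b"info"][b"length"])
```

Real clients also hash the bencoded `info` dictionary to derive the torrent's infohash, which is how peers identify the file on the network.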

            -

            Once you have downloaded the torrent file, you need to open it with a torrent client, which is a software that allows you to download and upload files through a peer-to-peer network. A torrent client connects you to other users who have the same file that you want to download, and transfers data between them. Some of the most popular torrent clients are uTorrent, BitTorrent, qBittorrent, Vuze, etc.

            -

            You can download and install a torrent client from its official website or from a trusted source. You should also update it regularly to ensure its performance and security. After you have installed a torrent client, you can open the torrent file with it by double-clicking on it or by dragging and dropping it into the client's window. The client will start downloading the game keygen torrent automatically.

            -

            Step 4: Run the keygen and generate an activation key

            -

            When the download is complete, you will have a folder that contains both the game installer and the key generator. The key generator is a program that can create an activation key for you, which is a code that you need to enter when you install the game to activate it. The activation key is usually a combination of letters and numbers.

            -

            To run the keygen and generate an activation key, you need to do the following:

            -
1. Open the folder that contains the game keygen torrent.
2. Find and run the key generator program, which is usually named "keygen.exe" or something similar.
3. Click the "Generate" or "Create" button to generate an activation key.
4. Copy or write down the activation key that appears on your screen.
            -

            You should also scan the key generator program with an antivirus software before running it, as it may contain viruses or malware that can harm your computer or steal your information.

            -

            Step 5: Install the game and enter the activation key

            -

The final step is to install the game and enter the activation key. To do this, you need to do the following:

1. Open the folder that contains the game keygen torrent.
2. Find and run the game installer program, which is usually named "setup.exe" or something similar.
3. Follow the instructions on the screen to install the game on your computer. You may need to choose the language, destination folder, and other options.
4. When prompted, enter the activation key that you generated with the key generator. You may need to copy and paste it or type it manually.
5. Wait for the installation to finish and launch the game from your desktop or Start menu.
            -

            Congratulations, you have successfully downloaded and installed James Cameron's Avatar: The Game using a keygen torrent. You can now enjoy playing the game for free.

            -

            How to play James Cameron The Avatar Game

            -

            Now that you have installed the game, you might be wondering how to play it. Here are some tips and tricks that can help you get started:

            -

            Choose your side: RDA or Na'vi

            -

            The first thing you need to do when you start the game is to choose which side you want to play as: the RDA (Resources Development Administration) or the Na'vi. The RDA are the human soldiers and mercenaries who work for the corporation that is mining Pandora for unobtanium. The Na'vi are the native inhabitants of Pandora who live in harmony with nature and worship a goddess called Eywa.

            -

            Your choice will affect the gameplay, storyline, and environment of the game. If you choose to play as the RDA, you will use advanced weapons and vehicles, such as assault rifles, shotguns, grenades, helicopters, and mechs. You will also have access to a device called an AMP suit, which is a powered exoskeleton that enhances your strength and durability. If you choose to play as the Na'vi, you will use primitive weapons and mounts, such as bows, arrows, spears, knives, horses, and banshees. You will also have access to a special ability called Pandorapedia, which allows you to scan and learn about the flora and fauna of Pandora.

            -

            Each side has its own advantages and disadvantages, so you should choose wisely based on your preference and playstyle. You can also switch sides at any time during the game by visiting a base camp or a tree of vision.

            -

            Explore the world of Pandora

            -

            One of the main attractions of James Cameron's Avatar: The Game is the world of Pandora itself. Pandora is a beautiful and exotic planet that is full of life and mystery. You can explore different regions of Pandora, such as jungles, mountains, swamps, caves, rivers, waterfalls, and floating islands. You can also interact with various creatures and plants that inhabit Pandora, some of which are friendly and some of which are hostile.

            -

            The world of Pandora is dynamic and responsive to your actions. You can affect the environment by destroying or protecting it, which will have consequences for your reputation and alignment. You can also witness day and night cycles, weather changes, and bioluminescence effects that make Pandora more realistic and immersive.

            -

To explore Pandora more easily, you can use different modes of transportation depending on your side. If you are playing as the RDA, you can use vehicles such as helicopters, mechs, buggies, boats, etc.

            Fight against enemies and complete missions

            -

            As you explore Pandora, you will encounter various enemies that will try to stop you or harm you. These enemies can be either human or alien, depending on your side and alignment. For example, if you are playing as the RDA, you will face Na'vi warriors, wild animals, and rebel soldiers. If you are playing as the Na'vi, you will face RDA soldiers, mercenaries, and machines.

            -

            To fight against enemies, you can use different weapons and abilities depending on your side and character class. If you are playing as the RDA, you can use firearms, explosives, melee weapons, and AMP suits. You can also upgrade your weapons and armor with different attachments and modifications. If you are playing as the Na'vi, you can use bows, arrows, spears, knives, and Pandorapedia. You can also upgrade your skills and abilities with different totems and blessings.

            -

            To progress in the game, you need to complete different missions that are given to you by various characters or factions. These missions can be either main or side missions, and they can involve different objectives such as killing enemies, escorting allies, collecting items, sabotaging facilities, etc. Some missions are optional and some are mandatory, and some have time limits or branching paths. Completing missions will reward you with experience points, money, items, reputation, and alignment points.

            -

            Customize your character and weapons

            -

            Another feature of James Cameron's Avatar: The Game is the customization of your character and weapons. You can create your own avatar (no pun intended) by choosing your gender, appearance, name, voice, and class. You can also change your outfit and accessories with different clothing items and badges that you can buy or unlock in the game.

            -

            You can also customize your weapons by adding different attachments and modifications that can improve their performance and functionality. For example, you can add scopes, silencers, magazines, lasers, etc. to your firearms. You can also change the color and design of your weapons with different skins and decals that you can buy or unlock in the game.

            -

            Customizing your character and weapons can make your gameplay more enjoyable and personalized. You can also show off your style and skills to other players online or offline.

            -

            Conclusion

            -

            In this article, we have shown you how to download and play James Cameron's Avatar: The Game for free using a keygen torrent. We have also given you some tips and tricks on how to play the game and enjoy its features.

            -

            However, we want to remind you that using a keygen torrent is illegal and unethical. You are breaking the law and disrespecting the game creators and distributors. You are also putting your computer and yourself at risk of viruses, malware, spyware, or legal actions. Therefore, we strongly advise you to buy the game from an authorized source and support the game industry.

            -

If you still want to use a keygen torrent at your own risk and responsibility, we hope that this article has been helpful and informative for you, and that you have fun playing James Cameron's Avatar: The Game and exploring the amazing world of Pandora. Here are some FAQs that you might have about the game and the keygen torrent:

            FAQs

            -
1. Q: Is James Cameron's Avatar: The Game a good game?

   A: It is a decent game that has received mixed reviews from critics and players. Its strengths include the graphics, the sound, the gameplay variety, and the online multiplayer mode; its weaknesses include the story, the voice acting, the bugs, and the repetitive missions. The game is not a masterpiece, but it is not a disaster either, and it can be enjoyed by fans of the movie or by casual gamers who like action-adventure games.

2. Q: Is James Cameron's Avatar: The Game related to the movie?

   A: The game is based on the movie, but it is not a direct adaptation of it. It is set in 2152, two years before the events of the movie, and tells a different story that focuses on a conflict between the RDA and the Na'vi over a sacred site called the Well of Souls. The game also features some characters and locations from the movie, such as Jake Sully, Neytiri, Grace Augustine, Colonel Quaritch, and Hometree.

3. Q: Is James Cameron's Avatar: The Game compatible with my computer?

   A: The game has moderate system requirements that can be met by most modern computers. Here are the minimum and recommended system requirements:

   | Minimum | Recommended |
   | --- | --- |
   | OS: Windows XP/Vista/7 | OS: Windows XP/Vista/7 |
   | CPU: 3.2 GHz Intel Pentium 4, 2.66 GHz Intel Core 2 Duo, or AMD Athlon 64 X2 3800+ or better | CPU: Intel Core 2 Duo family, AMD Athlon 64 X2 5200+, AMD Phenom or better |
   | RAM: 1 GB (Windows XP) or 2 GB (Windows Vista/7) | RAM: 2 GB |
   | GPU: 256 MB DirectX 10.0-compliant video card, or DirectX 9.0-compliant card with Shader Model 3.0 or higher (NVIDIA GeForce 6800/7/8/9/GTX200 series or ATI Radeon X1650/1950/HD2000/3000/4000 series) | GPU: 512 MB DirectX 10.0-compliant video card, or DirectX 9.0-compliant card with Shader Model 3.0 or higher (NVIDIA GeForce GTX260 series or ATI Radeon HD4800 series) |
   | HDD: 4 GB free space | HDD: 4 GB free space |
   | Sound: DirectX 9.0-compliant sound card | Sound: DirectX 9.0-compliant sound card |

4. Q: Is James Cameron's Avatar: The Game safe to download and play using a keygen torrent?

   A: No. As mentioned above, using a keygen torrent is illegal and unethical: you are breaking the law and disrespecting the game creators and distributors, and you are putting your computer and yourself at risk of viruses, malware, spyware, or legal action. You might damage your system or lose your data, or you might face fines or jail time if you are caught downloading or using pirated software.

5. Q: Where can I buy James Cameron's Avatar: The Game legally and safely?

   A: You can buy the game from various online stores or physical retailers that sell video games, such as Steam, the Ubisoft Store, Amazon, GameStop, Walmart, or Best Buy. You can also check your local stores for availability and prices.

            -
            -
            \ No newline at end of file diff --git a/spaces/stratussox/yolov5_inference/utils/flask_rest_api/README.md b/spaces/stratussox/yolov5_inference/utils/flask_rest_api/README.md deleted file mode 100644 index a726acbd92043458311dd949cc09c0195cd35400..0000000000000000000000000000000000000000 --- a/spaces/stratussox/yolov5_inference/utils/flask_rest_api/README.md +++ /dev/null @@ -1,73 +0,0 @@ -# Flask REST API - -[REST](https://en.wikipedia.org/wiki/Representational_state_transfer) [API](https://en.wikipedia.org/wiki/API)s are -commonly used to expose Machine Learning (ML) models to other services. This folder contains an example REST API -created using Flask to expose the YOLOv5s model from [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/). - -## Requirements - -[Flask](https://palletsprojects.com/p/flask/) is required. Install with: - -```shell -$ pip install Flask -``` - -## Run - -After Flask installation run: - -```shell -$ python3 restapi.py --port 5000 -``` - -Then use [curl](https://curl.se/) to perform a request: - -```shell -$ curl -X POST -F image=@zidane.jpg 'http://localhost:5000/v1/object-detection/yolov5s' -``` - -The model inference results are returned as a JSON response: - -```json -[ - { - "class": 0, - "confidence": 0.8900438547, - "height": 0.9318675399, - "name": "person", - "width": 0.3264600933, - "xcenter": 0.7438579798, - "ycenter": 0.5207948685 - }, - { - "class": 0, - "confidence": 0.8440024257, - "height": 0.7155083418, - "name": "person", - "width": 0.6546785235, - "xcenter": 0.427829951, - "ycenter": 0.6334488392 - }, - { - "class": 27, - "confidence": 0.3771208823, - "height": 0.3902671337, - "name": "tie", - "width": 0.0696444362, - "xcenter": 0.3675483763, - "ycenter": 0.7991207838 - }, - { - "class": 27, - "confidence": 0.3527112305, - "height": 0.1540903747, - "name": "tie", - "width": 0.0336618312, - "xcenter": 0.7814827561, - "ycenter": 0.5065554976 - } -] -``` - -An example python script to perform inference 
using [requests](https://docs.python-requests.org/en/master/) is given -in `example_request.py` diff --git a/spaces/sub314xxl/MetaGPT/tests/metagpt/actions/test_run_code.py b/spaces/sub314xxl/MetaGPT/tests/metagpt/actions/test_run_code.py deleted file mode 100644 index 1e451cb141cbf6a9a952e8706cfdbed559235c64..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/tests/metagpt/actions/test_run_code.py +++ /dev/null @@ -1,71 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 17:46 -@Author : alexanderwu -@File : test_run_code.py -""" -import pytest - -from metagpt.actions.run_code import RunCode - - -@pytest.mark.asyncio -async def test_run_text(): - result, errs = await RunCode.run_text("result = 1 + 1") - assert result == 2 - assert errs == "" - - result, errs = await RunCode.run_text("result = 1 / 0") - assert result == "" - assert "ZeroDivisionError" in errs - - -@pytest.mark.asyncio -async def test_run_script(): - # Successful command - out, err = await RunCode.run_script(".", command=["echo", "Hello World"]) - assert out.strip() == "Hello World" - assert err == "" - - # Unsuccessful command - out, err = await RunCode.run_script(".", command=["python", "-c", "print(1/0)"]) - assert "ZeroDivisionError" in err - - -@pytest.mark.asyncio -async def test_run(): - action = RunCode() - result = await action.run(mode="text", code="print('Hello, World')") - assert "PASS" in result - - result = await action.run( - mode="script", - code="echo 'Hello World'", - code_file_name="", - test_code="", - test_file_name="", - command=["echo", "Hello World"], - working_directory=".", - additional_python_paths=[], - ) - assert "PASS" in result - - -@pytest.mark.asyncio -async def test_run_failure(): - action = RunCode() - result = await action.run(mode="text", code="result = 1 / 0") - assert "FAIL" in result - - result = await action.run( - mode="script", - code='python -c "print(1/0)"', - code_file_name="", - test_code="", - 
test_file_name="", - command=["python", "-c", "print(1/0)"], - working_directory=".", - additional_python_paths=[], - ) - assert "FAIL" in result diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Lectora Inspire 17.1.5 Build 11381 Patch Download TOP.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Lectora Inspire 17.1.5 Build 11381 Patch Download TOP.md deleted file mode 100644 index 72c90702964fa0b9cdf30cdee0466afd561e928b..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Lectora Inspire 17.1.5 Build 11381 Patch Download TOP.md +++ /dev/null @@ -1,27 +0,0 @@ -
            -

            How to Download and Install Lectora Inspire 17.1.5 Build 11381 Patch

            -

            Lectora Inspire is a powerful e-learning authoring tool that allows you to create interactive and engaging courses for various platforms and devices. Lectora Inspire 17.1.5 is the latest version of the software that includes new features and updates such as:

            -
- Seamless Play publish option: This option enables your course to smoothly flow from page to page, eliminating the screen wipe commonly associated with HTML pages.
- Auto-play media on mobile devices: When publishing using the Seamless Play option, media files will honor the Auto-Start selection on mobile devices, allowing simplified use of page narration and videos.
- BranchTrack scenario-based simulations: You can easily create, import, and edit scenario-based exercises using the BranchTrack application. You can also track the learner's selections during the exercise and use the available score.
- Anchor the position of your objects: You can specify whether an object will maintain its location on the page within the view, even when the view is scrolled.
- SVG rendering of shapes and buttons: Using scalable vector graphics technology, published content will remain crisp and scalable on high-resolution displays.
- Camtasia 9 and Snagit 13: These are the latest versions of the screen recording and image editing tools that are integrated with Lectora Inspire.
            -

            If you want to download and install Lectora Inspire 17.1.5 Build 11381 Patch, you can follow these steps:

            -

            Lectora Inspire 17.1.5 Build 11381 Patch Download


            Download Zip: https://cinurl.com/2uEY4j



            -
              -
            1. Go to the Lectora Desktop version 17 Release Notes and Downloads page and click on the link for Lectora Inspire 17.1.5 Build 11381 Patch Download.
            2. -
            3. Save the file to your computer and run it as an administrator.
            4. -
            5. Follow the instructions on the screen to complete the installation process.
            6. -
            7. Restart Lectora Inspire and enjoy the new features and updates.
            8. -
            -

            Note: If you have previously installed Lectora Inspire 17.0 or any of its subversions, you do not need to uninstall them before installing Lectora Inspire 17.1.5 Build 11381 Patch. The patch will automatically update your existing installation.

            - -

            Lectora Inspire 17.1.5 Build 11381 Patch is compatible with Windows 7, 8, 8.1, and 10 operating systems. It also supports HTML5, SCORM, AICC, xAPI, and cmi5 standards for publishing your courses. You can also publish your courses to Lectora Online, ReviewLink, CourseMill, or any other LMS that supports these standards.

            -

            Lectora Inspire 17.1.5 Build 11381 Patch also comes with a user manual that provides detailed instructions and examples on how to use the software and its features. You can access the user manual from the Help menu in Lectora Inspire or from the Lectora Desktop version 17 Release Notes and Downloads page.

            -

            -

            If you have any questions or issues with Lectora Inspire 17.1.5 Build 11381 Patch, you can contact the ELB Learning support team via email, phone, or chat. You can also visit the Trivantis Community to get help from other Lectora users and experts.

            -
            -
            \ No newline at end of file diff --git a/spaces/suqionglin/White-box-Cartoonization/wbc/network.py b/spaces/suqionglin/White-box-Cartoonization/wbc/network.py deleted file mode 100644 index 6f16cee1aa1994d0a78c524f459764de5164e637..0000000000000000000000000000000000000000 --- a/spaces/suqionglin/White-box-Cartoonization/wbc/network.py +++ /dev/null @@ -1,62 +0,0 @@ -import tensorflow as tf -import numpy as np -import tensorflow.contrib.slim as slim - - - -def resblock(inputs, out_channel=32, name='resblock'): - - with tf.variable_scope(name): - - x = slim.convolution2d(inputs, out_channel, [3, 3], - activation_fn=None, scope='conv1') - x = tf.nn.leaky_relu(x) - x = slim.convolution2d(x, out_channel, [3, 3], - activation_fn=None, scope='conv2') - - return x + inputs - - - - -def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False): - with tf.variable_scope(name, reuse=reuse): - - x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None) - x0 = tf.nn.leaky_relu(x0) - - x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - x1 = slim.convolution2d(x1, channel*2, [3, 3], activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - - x2 = slim.convolution2d(x1, channel*2, [3, 3], stride=2, activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - x2 = slim.convolution2d(x2, channel*4, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - for idx in range(num_blocks): - x2 = resblock(x2, out_channel=channel*4, name='block_{}'.format(idx)) - - x2 = slim.convolution2d(x2, channel*2, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2] - x3 = tf.image.resize_bilinear(x2, (h1*2, w1*2)) - x3 = slim.convolution2d(x3+x1, channel*2, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - - h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2] - x4 = 
tf.image.resize_bilinear(x3, (h2*2, w2*2)) - x4 = slim.convolution2d(x4+x0, channel, [3, 3], activation_fn=None) - x4 = tf.nn.leaky_relu(x4) - x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None) - - return x4 - -if __name__ == '__main__': - - - pass \ No newline at end of file diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Any Dvd Cloner Platinum Crack Download [TOP].md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Any Dvd Cloner Platinum Crack Download [TOP].md deleted file mode 100644 index da1f0a4b581fbdc76b30afed2c33f7c1004934d7..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Any Dvd Cloner Platinum Crack Download [TOP].md +++ /dev/null @@ -1,6 +0,0 @@ -

            any dvd cloner platinum crack download


            DOWNLOAD 🌟 https://urluss.com/2uCF1W



            -
            -DVD-Cloner Platinum 2020 v17.60.1460 x64 x86 + Crack ... which can decrypt and copy a DVD to any blank disc with diverse copy modes for ... You can also download: DVD-Cloner 2020 v17.60 Build 1460 x64 x86 + Keygen ...
            -
            -
            -

            diff --git a/spaces/t13718236382/bingoGPT4/src/lib/isomorphic/index.ts b/spaces/t13718236382/bingoGPT4/src/lib/isomorphic/index.ts deleted file mode 100644 index 738dc92f74079ab762d584fb7422a8c8c3b61547..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/bingoGPT4/src/lib/isomorphic/index.ts +++ /dev/null @@ -1,17 +0,0 @@ -'use client' - -import Default from './browser' - -let exportsModel: any = {} - -if (process.browser) { - Object.assign(exportsModel, require('./browser').default) -} else { - Object.assign(exportsModel, require('./node').default) -} - -export default exportsModel! as typeof Default - -export const fetch: typeof Default.fetch = exportsModel!.fetch -export const WebSocket: typeof Default.WebSocket = exportsModel!.WebSocket -export const debug: typeof Default.debug = exportsModel!.debug diff --git a/spaces/team-indain-image-caption/Hindi-image-captioning/README.md b/spaces/team-indain-image-caption/Hindi-image-captioning/README.md deleted file mode 100644 index e14ae80131b3703a259ee6f820609541da62f655..0000000000000000000000000000000000000000 --- a/spaces/team-indain-image-caption/Hindi-image-captioning/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Hindi Image Captioning -emoji: 🌍 -colorFrom: red -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. 
- -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/teragron/TinyStories/model.py b/spaces/teragron/TinyStories/model.py deleted file mode 100644 index 9e4ce22088f94ed16edd1894482abe1efedc93b8..0000000000000000000000000000000000000000 --- a/spaces/teragron/TinyStories/model.py +++ /dev/null @@ -1,343 +0,0 @@ -import math -import struct -import inspect -from dataclasses import dataclass -from typing import Any, Optional, Tuple - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn - -@dataclass -class ModelArgs: - # default hyperparameters for the Llama 7B model - dim: int = 4096 - n_layers: int = 32 - n_heads: int = 32 - n_kv_heads: Optional[int] = None - vocab_size: int = 32000 - hidden_dim: Optional[int] = None - multiple_of: int = 256 # MLP hidden layer size will be multiple of - norm_eps: float = 1e-5 - max_seq_len: int = 2048 - dropout: float = 0.0 - - -class RMSNorm(torch.nn.Module): - def __init__(self, dim: int, eps: float): - super().__init__() - self.eps = eps - self.weight = nn.Parameter(torch.ones(dim)) - - def _norm(self, x): - return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps) - - def forward(self, x): - output = self._norm(x.float()).type_as(x) - return output * self.weight - - -def precompute_freqs_cis(dim: int, end: int, theta: float = 10000.0): - freqs = 1.0 / (theta ** (torch.arange(0, dim, 2)[: (dim // 2)].float() / dim)) - t = torch.arange(end, device=freqs.device) # type: ignore - freqs = torch.outer(t, freqs).float() # type: ignore - freqs_cos = torch.cos(freqs) # real part - freqs_sin = torch.sin(freqs) # imaginary part - return freqs_cos, freqs_sin - -def reshape_for_broadcast(freqs_cis: torch.Tensor, x: torch.Tensor): - ndim = x.ndim - assert 0 <= 1 < ndim - assert freqs_cis.shape == 
(x.shape[1], x.shape[-1]) - shape = [d if i == 1 or i == ndim - 1 else 1 for i, d in enumerate(x.shape)] - return freqs_cis.view(shape) - -def apply_rotary_emb( - xq: torch.Tensor, - xk: torch.Tensor, - freqs_cos: torch.Tensor, - freqs_sin: torch.Tensor -) -> Tuple[torch.Tensor, torch.Tensor]: - - # reshape xq and xk to match the complex representation - xq_r, xq_i = xq.float().reshape(xq.shape[:-1] + (-1, 2)).unbind(-1) - xk_r, xk_i = xk.float().reshape(xk.shape[:-1] + (-1, 2)).unbind(-1) - - # reshape freqs_cos and freqs_sin for broadcasting - freqs_cos = reshape_for_broadcast(freqs_cos, xq_r) - freqs_sin = reshape_for_broadcast(freqs_sin, xq_r) - - # apply rotation using real numbers - xq_out_r = xq_r * freqs_cos - xq_i * freqs_sin - xq_out_i = xq_r * freqs_sin + xq_i * freqs_cos - xk_out_r = xk_r * freqs_cos - xk_i * freqs_sin - xk_out_i = xk_r * freqs_sin + xk_i * freqs_cos - - # flatten last two dimensions - xq_out = torch.stack([xq_out_r, xq_out_i], dim=-1).flatten(3) - xk_out = torch.stack([xk_out_r, xk_out_i], dim=-1).flatten(3) - - return xq_out.type_as(xq), xk_out.type_as(xk) - -def repeat_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor: - """torch.repeat_interleave(x, dim=2, repeats=n_rep)""" - bs, slen, n_kv_heads, head_dim = x.shape - if n_rep == 1: - return x - return ( - x[:, :, :, None, :] - .expand(bs, slen, n_kv_heads, n_rep, head_dim) - .reshape(bs, slen, n_kv_heads * n_rep, head_dim) - ) - -class Attention(nn.Module): - def __init__(self, args: ModelArgs): - super().__init__() - self.n_kv_heads = args.n_heads if args.n_kv_heads is None else args.n_kv_heads - assert args.n_heads % self.n_kv_heads == 0 - model_parallel_size = 1 - self.n_local_heads = args.n_heads // model_parallel_size - self.n_local_kv_heads = self.n_kv_heads // model_parallel_size - self.n_rep = self.n_local_heads // self.n_local_kv_heads - self.head_dim = args.dim // args.n_heads - self.wq = nn.Linear(args.dim, args.n_heads * self.head_dim, bias=False) - self.wk = 
nn.Linear(args.dim, self.n_kv_heads * self.head_dim, bias=False) - self.wv = nn.Linear(args.dim, self.n_kv_heads * self.head_dim, bias=False) - self.wo = nn.Linear(args.n_heads * self.head_dim, args.dim, bias=False) - self.attn_dropout = nn.Dropout(args.dropout) - self.resid_dropout = nn.Dropout(args.dropout) - self.dropout = args.dropout - - # use flash attention or a manual implementation? - self.flash = hasattr(torch.nn.functional, 'scaled_dot_product_attention') - if not self.flash: - print("WARNING: using slow attention. Flash Attention requires PyTorch >= 2.0") - mask = torch.full((1, 1, args.max_seq_len, args.max_seq_len), float("-inf")) - mask = torch.triu(mask, diagonal=1) - self.register_buffer("mask", mask) - - def forward( - self, - x: torch.Tensor, - freqs_cos: torch.Tensor, - freqs_sin: torch.Tensor, - ): - bsz, seqlen, _ = x.shape - - # QKV - xq, xk, xv = self.wq(x), self.wk(x), self.wv(x) - xq = xq.view(bsz, seqlen, self.n_local_heads, self.head_dim) - xk = xk.view(bsz, seqlen, self.n_local_kv_heads, self.head_dim) - xv = xv.view(bsz, seqlen, self.n_local_kv_heads, self.head_dim) - - # RoPE relative positional embeddings - xq, xk = apply_rotary_emb(xq, xk, freqs_cos, freqs_sin) - - # grouped multiquery attention: expand out keys and values - xk = repeat_kv(xk, self.n_rep) # (bs, seqlen, n_local_heads, head_dim) - xv = repeat_kv(xv, self.n_rep) # (bs, seqlen, n_local_heads, head_dim) - - # make heads into a batch dimension - xq = xq.transpose(1, 2) # (bs, n_local_heads, seqlen, head_dim) - xk = xk.transpose(1, 2) - xv = xv.transpose(1, 2) - - # flash implementation - if self.flash: - output = torch.nn.functional.scaled_dot_product_attention(xq, xk, xv, attn_mask=None, dropout_p=self.dropout if self.training else 0.0, is_causal=True) - else: - # manual implementation - scores = torch.matmul(xq, xk.transpose(2, 3)) / math.sqrt(self.head_dim) - assert hasattr(self, 'mask') - scores = scores + self.mask[:, :, :seqlen, :seqlen] # (bs, n_local_heads, 
seqlen, cache_len + seqlen) - scores = F.softmax(scores.float(), dim=-1).type_as(xq) - scores = self.attn_dropout(scores) - output = torch.matmul(scores, xv) # (bs, n_local_heads, seqlen, head_dim) - - # restore time as batch dimension and concat heads - output = output.transpose(1, 2).contiguous().view(bsz, seqlen, -1) - - # final projection into the residual stream - output = self.wo(output) - output = self.resid_dropout(output) - return output - - -class FeedForward(nn.Module): - def __init__(self, dim: int, hidden_dim: int, multiple_of: int, dropout: float): - super().__init__() - if hidden_dim is None: - hidden_dim = 4 * dim - hidden_dim = int(2 * hidden_dim / 3) - hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of) - self.w1 = nn.Linear(dim, hidden_dim, bias=False) - self.w2 = nn.Linear(hidden_dim, dim, bias=False) - self.w3 = nn.Linear(dim, hidden_dim, bias=False) - self.dropout = nn.Dropout(dropout) - - def forward(self, x): - return self.dropout(self.w2(F.silu(self.w1(x)) * self.w3(x))) - - -class TransformerBlock(nn.Module): - def __init__(self, layer_id: int, args: ModelArgs): - super().__init__() - self.n_heads = args.n_heads - self.dim = args.dim - self.head_dim = args.dim // args.n_heads - self.attention = Attention(args) - self.feed_forward = FeedForward( - dim=args.dim, - hidden_dim=args.hidden_dim, - multiple_of=args.multiple_of, - dropout=args.dropout, - ) - self.layer_id = layer_id - self.attention_norm = RMSNorm(args.dim, eps=args.norm_eps) - self.ffn_norm = RMSNorm(args.dim, eps=args.norm_eps) - - def forward(self, x, freqs_cos, freqs_sin): - h = x + self.attention.forward(self.attention_norm(x), freqs_cos, freqs_sin) - out = h + self.feed_forward.forward(self.ffn_norm(h)) - return out - - -class Transformer(nn.Module): - last_loss: Optional[torch.Tensor] - - def __init__(self, params: ModelArgs): - super().__init__() - self.params = params - self.vocab_size = params.vocab_size - self.n_layers = params.n_layers - - 
self.tok_embeddings = nn.Embedding(params.vocab_size, params.dim) - self.dropout = nn.Dropout(params.dropout) - self.layers = torch.nn.ModuleList() - for layer_id in range(params.n_layers): - self.layers.append(TransformerBlock(layer_id, params)) - self.norm = RMSNorm(params.dim, eps=params.norm_eps) - self.output = nn.Linear(params.dim, params.vocab_size, bias=False) - - # share the unembedding parameters with the embedding parameters - self.tok_embeddings.weight = self.output.weight # https://paperswithcode.com/method/weight-tying - - # some useful precompute for the RoPE relative positional embeddings - freqs_cos, freqs_sin = precompute_freqs_cis(self.params.dim // self.params.n_heads, self.params.max_seq_len) - self.register_buffer("freqs_cos", freqs_cos, persistent=False) - self.register_buffer("freqs_sin", freqs_sin, persistent=False) - - # init all weights - self.apply(self._init_weights) - # apply special scaled init to the residual projections, per GPT-2 paper - for pn, p in self.named_parameters(): - if pn.endswith('w3.weight') or pn.endswith('wo.weight'): - torch.nn.init.normal_(p, mean=0.0, std=0.02/math.sqrt(2 * params.n_layers)) - - # Initialize attribute for the loss of the last forward call. This will be set if the forward is called with a targets tensor. 
- self.last_loss = None - - def _init_weights(self, module): - if isinstance(module, nn.Linear): - torch.nn.init.normal_(module.weight, mean=0.0, std=0.02) - if module.bias is not None: - torch.nn.init.zeros_(module.bias) - elif isinstance(module, nn.Embedding): - torch.nn.init.normal_(module.weight, mean=0.0, std=0.02) - - def forward(self, tokens: torch.Tensor, targets: Optional[torch.Tensor] = None) -> torch.Tensor: - _bsz, seqlen = tokens.shape - h = self.tok_embeddings(tokens) - h = self.dropout(h) - freqs_cos = self.freqs_cos[:seqlen] - freqs_sin = self.freqs_sin[:seqlen] - - for layer in self.layers: - h = layer(h, freqs_cos, freqs_sin) - h = self.norm(h) - - if targets is not None: - # if we are given some desired targets also calculate the loss - logits = self.output(h) - self.last_loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1), ignore_index=-1) - else: - # inference-time mini-optimization: only forward the output on the very last position - logits = self.output(h[:, [-1], :]) # note: using list [-1] to preserve the time dim - self.last_loss = None - - return logits - - def configure_optimizers(self, weight_decay, learning_rate, betas, device_type): - # start with all of the candidate parameters - param_dict = {pn: p for pn, p in self.named_parameters()} - # filter out those that do not require grad - param_dict = {pn: p for pn, p in param_dict.items() if p.requires_grad} - # create optim groups. Any parameters that is 2D will be weight decayed, otherwise no. - # i.e. all weight tensors in matmuls + embeddings decay, all biases and layernorms don't. 
- decay_params = [p for n, p in param_dict.items() if p.dim() >= 2] - nodecay_params = [p for n, p in param_dict.items() if p.dim() < 2] - optim_groups = [ - {'params': decay_params, 'weight_decay': weight_decay}, - {'params': nodecay_params, 'weight_decay': 0.0} - ] - num_decay_params = sum(p.numel() for p in decay_params) - num_nodecay_params = sum(p.numel() for p in nodecay_params) - print(f"num decayed parameter tensors: {len(decay_params)}, with {num_decay_params:,} parameters") - print(f"num non-decayed parameter tensors: {len(nodecay_params)}, with {num_nodecay_params:,} parameters") - # Create AdamW optimizer and use the fused version if it is available - fused_available = 'fused' in inspect.signature(torch.optim.AdamW).parameters - use_fused = fused_available and device_type == 'cuda' - extra_args = dict(fused=True) if use_fused else dict() - optimizer = torch.optim.AdamW(optim_groups, lr=learning_rate, betas=betas, **extra_args) - print(f"using fused AdamW: {use_fused}") - - return optimizer - - def estimate_mfu(self, fwdbwd_per_iter, dt): - """ estimate model flops utilization (MFU) in units of A100 bfloat16 peak FLOPS """ - # first estimate the number of flops we do per iteration. 
- # see PaLM paper Appendix B as ref: https://arxiv.org/abs/2204.02311 - N = sum(p.numel() for p in self.parameters()) - cfg = self.params - L, H, Q, T = cfg.n_layers, cfg.n_heads, cfg.dim//cfg.n_heads, cfg.max_seq_len - flops_per_token = 6*N + 12*L*H*Q*T - flops_per_fwdbwd = flops_per_token * T - flops_per_iter = flops_per_fwdbwd * fwdbwd_per_iter - # express our flops throughput as ratio of A100 bfloat16 peak flops - flops_achieved = flops_per_iter * (1.0/dt) # per second - flops_promised = 312e12 # A100 GPU bfloat16 peak flops is 312 TFLOPS - mfu = flops_achieved / flops_promised - return mfu - - @torch.inference_mode() - def generate(self, idx, max_new_tokens, temperature=1.0, top_k=None): - """ - Take a conditioning sequence of indices idx (LongTensor of shape (b,t)) and complete - the sequence max_new_tokens times, feeding the predictions back into the model each time. - Most likely you'll want to make sure to be in model.eval() mode of operation for this. - Also note this is a super inefficient version of sampling with no key/value cache. 
- """ - for _ in range(max_new_tokens): - # if the sequence context is growing too long we must crop it at block_size - idx_cond = idx if idx.size(1) <= self.params.max_seq_len else idx[:, -self.params.max_seq_len:] - # forward the model to get the logits for the index in the sequence - logits = self(idx_cond) - logits = logits[:, -1, :] # crop to just the final time step - if temperature == 0.0: - # "sample" the single most likely index - _, idx_next = torch.topk(logits, k=1, dim=-1) - else: - # pluck the logits at the final step and scale by desired temperature - logits = logits / temperature - # optionally crop the logits to only the top k options - if top_k is not None: - v, _ = torch.topk(logits, min(top_k, logits.size(-1))) - logits[logits < v[:, [-1]]] = -float('Inf') - # apply softmax to convert logits to (normalized) probabilities - probs = F.softmax(logits, dim=-1) - idx_next = torch.multinomial(probs, num_samples=1) - # append sampled index to the running sequence and continue - idx = torch.cat((idx, idx_next), dim=1) - - return idx diff --git a/spaces/terfces0erbo/CollegeProjectV2/Charles Poliquin Winning The Arms Race Pdf Pdf Checked ((EXCLUSIVE)).md b/spaces/terfces0erbo/CollegeProjectV2/Charles Poliquin Winning The Arms Race Pdf Pdf Checked ((EXCLUSIVE)).md deleted file mode 100644 index d76a992c049b84403b45919ba0344108494014e2..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Charles Poliquin Winning The Arms Race Pdf Pdf Checked ((EXCLUSIVE)).md +++ /dev/null @@ -1,9 +0,0 @@ -
            -

            This is a brief, to-the-point summary of the Charles Poliquin book, The Invisible Wing. It's worth reading, and you can find out more online. The book contains an audio CD of the former drug addict Charles Poliquin discussing his experience as an athlete, going through his exercise rehab, and his coaching philosophies.

            -

            Charles Poliquin Winning The Arms Race Pdf Pdf Checked


            DOWNLOAD ✯✯✯ https://bytlly.com/2uGkCt



            -

            When Charles Poliquin was asked if he was strong or weak, he answered that he was neither, only strong enough. He added that strength is relative: when a person is weak they are strong, and when a person is strong they are weak. Strength is not a one-dimensional subject. The only way to see the full potential of a muscle and the work it can produce is through total fatigue, which is best achieved through eccentric exercise, the phase in which a muscle or muscle group lengthens under load and can no longer contract. It is during this eccentric phase that the muscle is developed, and it is this phase of the movement that will be remembered.

            -

            Like the rest of us, Charles Poliquin was helped and influenced by many, so his own list of influences is long. However, his book The Invisible Wing is an absolute must-read for any strength athlete. It chronicles the trials, tribulations, and coaching philosophy of Charles Poliquin. It's the story of a man and his career, and it is truly inspiring.

            -

            Charles Poliquin was an addict and an alcoholic. When he decided he was going to get himself back on track, he sought out a new life. At 20 years old, about to leave the U.S. with nothing, he climbed into a small bus in his size 14 shoes and headed to Winnipeg. He was unknown, homeless, and penniless.

            -

            -
            -
            \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Comfort Software Hot Alarm Clock 3.1.0.0 Portable.md b/spaces/terfces0erbo/CollegeProjectV2/Comfort Software Hot Alarm Clock 3.1.0.0 Portable.md deleted file mode 100644 index 46cd85693576f469490ed1dfa44fec4844b798cc..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Comfort Software Hot Alarm Clock 3.1.0.0 Portable.md +++ /dev/null @@ -1,10 +0,0 @@ -
            -

            If the program is installed and configured correctly, you should see the options listed at the top of the software, as shown in this image. If you don't, you need to restart the program (restarting the laptop will also do this, for example by powering it off and back on).

            -

            Comfort Software Hot Alarm Clock 3.1.0.0 Portable


            DOWNLOAD --->>> https://bytlly.com/2uGlW9



            -

            The Controls tab allows you to change the settings for the alarm clock, the clock face, and the alarms. The time zone settings allow you to set your time and date according to your geographical location.

            -

            Every day, your clock will log your daily activities, not only amassing miles for you but also serving as a support system so that you can stay in touch. A number of different features will be added to the program on a monthly basis, along with new themes to enjoy. Any user can change the display name to their own name in no time.

            -

            Micra Design Suite.Elegant.21.Portable.No.LINUX.Full.Crack + MICRA DESIGN.S6.FINAL.x86.ALL.EVC.EXE is a professional suite of software programs designed to meet your graphic, video, printing, scanning, and photo needs. It comes with all the tools you need to quickly produce stunning photos, polished graphic designs, and high-resolution prints with great

            -

            eBook Reader for Mac is a free eBook management, synchronization, and delivery application for macOS. It can manage various eBook reader applications and formats, including ePub files, Amazon EPUB books, eReader books, and EPUB/MOBI/AZW3 books. The user can transfer

            -

            -
            -
            \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Infotech English For Computer Users 4th Edition Key Answers.md b/spaces/terfces0erbo/CollegeProjectV2/Infotech English For Computer Users 4th Edition Key Answers.md deleted file mode 100644 index 9dc914899e7c5bce95f0b13bc131880b7bca2736..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Infotech English For Computer Users 4th Edition Key Answers.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Infotech English for computer users 4th edition Key Answers


            Download –––––>>> https://bytlly.com/2uGkZo



            -
            -Title: Infotech. English for computer users, 4th edition: Teacher book. Author: Esteras S. Publisher: Cambridge. Year: 2009. Pages: 162 ...
            -
            -
            -

            diff --git a/spaces/thov/medicalSegmentation/UNET_perso.py b/spaces/thov/medicalSegmentation/UNET_perso.py deleted file mode 100644 index df80f4e1b080f38ef283b6af4a93859f688f6f97..0000000000000000000000000000000000000000 --- a/spaces/thov/medicalSegmentation/UNET_perso.py +++ /dev/null @@ -1,75 +0,0 @@ -import torch -import torch.nn as nn -import torchvision.transforms.functional as TF - - -#Aladdinpersson/Machine-Learning-Collection GIT - -""" -Defining a UNet block -in_channels: image dimension -""" -class DoubleConv(nn.Module): - def __init__(self, in_channels, out_channels): - super(DoubleConv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d(in_channels, out_channels, 3, 1, 1, bias=False), - nn.BatchNorm2d(out_channels), - nn.ReLU(inplace=True), - nn.Conv2d(out_channels, out_channels, 3, 1, 1, bias=False), - nn.BatchNorm2d(out_channels), - nn.ReLU(inplace=True), - ) - - def forward(self, x): - return self.conv(x) - -class UNET(nn.Module): - def __init__( - self, in_channels=3, out_channels=4, features=[64, 128, 256, 512]): - super(UNET, self).__init__() - self.ups = nn.ModuleList() - self.downs = nn.ModuleList() - self.pool = nn.MaxPool2d(kernel_size=2, stride=2) - - # Down part of UNET - for feature in features: - self.downs.append(DoubleConv(in_channels, feature)) - in_channels = feature - - # Up part of UNET - for feature in reversed(features): - self.ups.append( - nn.ConvTranspose2d( - feature*2, feature, kernel_size=2, stride=2, - ) - ) - self.ups.append(DoubleConv(feature*2, feature)) - - #layer between down part and up part - self.bottleneck = DoubleConv(features[-1], features[-1]*2) - self.final_conv = nn.Conv2d(features[0], out_channels, kernel_size=1) - - def forward(self, x): - skip_connections = [] - - for down in self.downs: - x = down(x) - skip_connections.append(x) - x = self.pool(x) - - x = self.bottleneck(x) - skip_connections = skip_connections[::-1] - - for idx in range(0, len(self.ups), 2): - x = self.ups[idx](x) - 
skip_connection = skip_connections[idx//2] - - #Double check if input size is not divisible by 2, we need to be sure that the two shapes are similar - if x.shape != skip_connection.shape: - x = TF.resize(x, size=skip_connection.shape[2:]) - - concat_skip = torch.cat((skip_connection, x), dim=1) - x = self.ups[idx+1](concat_skip) - - return self.final_conv(x) \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable[by Robert] .rar Tips and Tricks for Professional Sound.md b/spaces/tialenAdioni/chat-gpt-api/logs/Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable[by Robert] .rar Tips and Tricks for Professional Sound.md deleted file mode 100644 index 9af02635d0f7fc214fe9c088cdda791c30e6918f..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable[by Robert] .rar Tips and Tricks for Professional Sound.md +++ /dev/null @@ -1,183 +0,0 @@ - -

            Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar: A Comprehensive Review

            -

            If you are looking for a professional audio workstation that can help you mix, edit, and create audio content with ease and efficiency, you might want to check out Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar.

            -

            Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar


            Download Zip ---> https://urlcod.com/2uK374



            -

            This is a portable version of Adobe Audition CC 2015.2, a powerful audio application designed to accelerate video production workflows and audio finishing, and to deliver a polished mix with pristine sound.

            -

            In this article, we will review this file in detail, covering its features, benefits, installation process, usage tips, comparison with other software, and more.

            -

            By the end of this article, you will have a clear idea of what Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar can do for you and how you can get started with it right away.

            -

            What is Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar?

            -

            Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar is a compressed file that contains a portable version of Adobe Audition CC 2015.2, which is a software application that allows you to record, edit, mix, master, and restore audio.

            -


            -

            A brief introduction to Adobe Audition CC

            -

            Adobe Audition CC is a part of the Adobe Creative Cloud suite, which is a collection of software tools for creative professionals.

            -

            Adobe Audition CC began as Cool Edit Pro, a popular audio editor developed by Syntrillium Software, which Adobe Systems acquired in 2003.

            -

            Since then, Adobe has improved and expanded the features and capabilities of Adobe Audition CC, making it one of the most widely used audio software in the industry.

            -

            Some of the main features of Adobe Audition CC include:

            -
              -
            • Multitrack recording and editing
            • -
            • Waveform editing and spectral display
            • -
            • Audio effects and plugins
            • -
            • Noise reduction and restoration
            • -
            • Loudness metering and correction
            • -
            • Surround sound mixing and encoding
            • -
            • Batch processing and scripting
            • -
            • Integration with other Adobe applications
            • -
            -

            The features and benefits of the portable version by Robert

            -

            The portable version of Adobe Audition CC 2015.2 by Robert is a modified version that does not require installation or activation.

            -

            This means that you can run it from any removable device, such as a USB flash drive or an external hard drive, without affecting your system registry or leaving any traces behind.

            -

            Some of the advantages of using the portable version are:

            -
              -
            • You can use it on any computer without installing anything
            • -
            • You can save space on your hard drive by storing it on a removable device
            • -
            • You can avoid compatibility issues with other software or updates
            • -
            • You can bypass any restrictions or limitations imposed by your system administrator or network
            • -
            • You can keep your personal settings and preferences intact
            • -
            -

            How to download and install the file

            -

            To download the file, you need to find a reliable source that offers it for free or for a reasonable price.

            -

            One such source is SolidTorrents, which is a torrent search engine that indexes various torrent sites.

            -

            To download the file from SolidTorrents, you need to follow these steps:

            -
              -
            1. Go to https://solidtorrents.to/torrents/adobe-audition-cc-2015-2-v9-2-1-x64-incl-patch-por-cf912/5c463c6a29dd4319e4e4f135/
            2. -
            3. Click on the "Torrent Download" button or the "Magnet Download" button
            4. -
            5. If you have a torrent client installed on your computer, such as uTorrent or BitTorrent, it will automatically open and start downloading the file
            6. -
            7. If you don't have a torrent client installed on your computer, you need to download one first from their official websites or from other sources
            8. -
            9. Once the file is downloaded, you need to extract it using a program that can handle RAR files, such as WinRAR or 7-Zip
            10. -
            11. You will get two files: Adobe Audition CC 2015.2 v9.2.1 (x64) Incl Patch + Portable.exe and Adobe Audition CC 2015.2 v9.2.1 (x64) Incl Patch + Portable.nfo
            12. -
            13. The first file is the executable file that contains the portable version of Adobe Audition CC 2015.2
            14. -
            15. The second file is a text file that contains some information about the file and its creator
            16. -
            17. To run the portable version of Adobe Audition CC 2015.2, you just need to double-click on the executable file or right-click on it and select "Run as administrator"
            18. -
            19. You will see a splash screen with the logo of Adobe Audition CC 2015.2 and then the main interface of the software will appear
            20. -
            21. You can now use Adobe Audition CC 2015.2 as usual without installing anything on your computer
            22. -
            -
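Before running any executable obtained this way, it is worth verifying the download's integrity against a checksum published by the uploader, in addition to scanning it with an antivirus program. A minimal Python sketch of a SHA-256 check — the archive name and expected digest below are placeholders, not real values for this file:

```python
import hashlib
import os
import sys

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in chunks
    so large archives do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Placeholder values -- substitute the real archive name and the
    # checksum published alongside the torrent, if one is available.
    archive = "downloaded-archive.rar"
    expected = "0000000000000000000000000000000000000000000000000000000000000000"
    if os.path.exists(archive):
        actual = sha256_of_file(archive)
        if actual != expected:
            sys.exit(f"Checksum mismatch: {actual}")
        print("Checksum OK")
```

A mismatch means the file was corrupted in transit or is not the file the uploader described, and it should not be run.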

            Why use Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar?

            -

            Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar is a great choice for anyone who wants to work with audio in a professional and efficient way.

            -

            The advantages of using Adobe Audition CC for audio editing and production

            -

            Adobe Audition CC is a versatile and powerful audio software that can handle any kind of audio project, from simple recordings to complex mixes.

            -

            Some of the benefits of using Adobe Audition CC are:

            -
              -
            • You can record high-quality audio with multiple inputs and outputs
            • -
            • You can edit audio with precision and flexibility using tools like cut, copy, paste, trim, fade, stretch, pitch shift, time warp, etc.
            • -
            • You can mix audio with ease and control using tools like volume envelopes, automation lanes, effects racks, buses, sends, returns, etc.
            • -
            • You can apply various effects and plugins to enhance your audio quality and creativity using tools like EQs, compressors, limiters, reverbs, delays, flangers, phasers, choruses, distortions, modulators, filters, etc.
            • -
            • You can reduce noise and restore damaged audio. To get started, choose whether to work with multitrack or single-track audio, then:
            • -
            • Import your audio files by clicking on File > Import > File or dragging and dropping them from your file explorer
            • -
            • Arrange your audio clips on the timeline by dragging and dropping them on the desired tracks
            • -
            • Edit your audio clips by using the tools on the toolbar, such as the Selection Tool, the Razor Tool, the Time Selection Tool, etc.
            • -
            • Mix your audio clips by adjusting their volume levels, panning, effects, automation, etc. using the tools on the mixer panel
            • -
            • Apply effects and plugins to your audio clips by clicking on Effects > Audio Effects or Effects > Audio Plug-in Manager and choosing the desired effect or plugin
            • -
            • Export your audio project by clicking on File > Export > Multitrack Mixdown or File > Export > File and choosing the desired format and settings
            • -
          -
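As a concrete illustration of the clip editing described above, a linear fade-in or fade-out is just a per-sample gain ramp. The sketch below is a simplified pure-Python illustration (real work would use Audition's Fade handles or a DSP library; samples are assumed to be floats in the range -1.0 to 1.0):

```python
def apply_fades(samples, fade_in, fade_out):
    """Apply a linear fade-in over the first `fade_in` samples and a
    linear fade-out over the last `fade_out` samples of a clip."""
    out = list(samples)
    n = len(out)
    for i in range(min(fade_in, n)):
        out[i] *= i / fade_in            # gain ramps 0.0 -> ~1.0
    for i in range(min(fade_out, n)):
        out[n - 1 - i] *= i / fade_out   # gain ramps down to 0.0 at the end
    return out

clip = [1.0] * 10
faded = apply_fades(clip, fade_in=4, fade_out=4)
```

The same idea generalizes to crossfades between two overlapping clips: fade one out while fading the other in and sum the results.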

          Some tips for using Adobe Audition CC more effectively are:

          -
            -
          • Use keyboard shortcuts to perform common tasks faster and easier. You can view and customize the keyboard shortcuts by clicking on Edit > Keyboard Shortcuts
          • -
          • Use the Essential Sound panel to quickly adjust the sound quality and loudness of your audio clips according to their type (dialogue, music, sound effects, ambience)
          • -
          • Use the Remix feature to automatically adjust the duration of a music track to fit your video or podcast without affecting its quality or musicality
          • -
          • Use the Synthesized Speech feature to generate realistic speech from text using various languages and voices
          • -
          • Use the Automatic Loudness Correction feature to automatically adjust the loudness of your audio project to meet the broadcast standards and streaming services requirements
          • -
          • Use the Dynamic Link feature to stream video content from Premiere Pro CC without rendering or exporting
          • -
          -
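The loudness-related tips above rest on one idea: measure a clip's level and compare it to a target. True broadcast loudness (LUFS, per ITU-R BS.1770) involves frequency weighting and gating, but a plain RMS measurement in dBFS is a useful first approximation — a hedged sketch, again assuming float samples in -1.0 to 1.0:

```python
import math

def rms_dbfs(samples):
    """Return the RMS level of float samples in dBFS.
    Digital silence (or an empty clip) is reported as -infinity."""
    if not samples:
        return float("-inf")
    mean_square = sum(s * s for s in samples) / len(samples)
    if mean_square == 0.0:
        return float("-inf")
    return 10.0 * math.log10(mean_square)

# A full-scale square wave measures 0 dBFS RMS
level = rms_dbfs([1.0, -1.0] * 100)
```

Comparing this value against a station's or streaming service's target gives a rough idea of how much gain correction a clip needs.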

          The best practices and tricks for enhancing audio quality and performance

          -

          To enhance your audio quality and performance with Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar, you need to follow some best practices and tricks that can make a difference in your final output.

          -

          Some of these best practices and tricks are:

          -
            -
          • Record your audio with a good microphone and a quiet environment to minimize noise and distortion
          • -
          • Use a pop filter or a windscreen to reduce plosives and breath sounds
          • -
          • Use headphones or monitor speakers to listen to your audio while recording and editing
          • -
          • Normalize your audio clips to a consistent level before mixing them
          • -
          • Use EQs, compressors, limiters, reverbs, delays, etc. sparingly and subtly to enhance your audio without overdoing it
          • -
          • Use noise reduction and restoration tools like Sound Remover, DeClicker, DeClipper, DeHummer, etc. to remove unwanted sounds like hums, clicks, clips, hisses, etc.
          • -
          • Use spectral editing tools like Spectral Frequency Display, Spectral Pan Display, Spectral Phase Display, etc. to visualize and edit your audio in frequency domain
          • -
          • Use batch processing and scripting tools like Batch Process Panel, Favorites Panel, Scripts Panel, etc. to apply effects and commands to multiple files at once
          • -
          • Use multitrack templates and presets to save time and effort when creating similar projects
          • -
          • Use markers and metadata to organize and annotate your audio files and projects
          • -
          -
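The "normalize to a consistent level" tip can be made concrete: peak normalization scales every sample by one gain factor so the loudest sample lands exactly at a chosen ceiling, such as -1 dBFS. A minimal sketch under the assumption of float samples in -1.0 to 1.0:

```python
def normalize_peak(samples, target_dbfs=-1.0):
    """Scale samples so the absolute peak sits at target_dbfs."""
    peak = max((abs(s) for s in samples), default=0.0)
    if peak == 0.0:
        return list(samples)  # silence: there is nothing to scale
    target_linear = 10.0 ** (target_dbfs / 20.0)  # dBFS -> linear amplitude
    gain = target_linear / peak
    return [s * gain for s in samples]

quiet_clip = [0.1, -0.25, 0.2]
loud_clip = normalize_peak(quiet_clip, target_dbfs=-1.0)
```

Because a single gain is applied, the relative dynamics of the clip are preserved; only its overall level changes.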

          The common problems and solutions for using Adobe Audition CC

          -

          As with any software application, Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar can sometimes encounter some problems or errors that can affect its performance or functionality.

          -

          Some of these common problems and solutions are:

          - - - - - - - -
          ProblemSolution
          The portable version of Adobe Audition CC does not run or crashes frequently.- Make sure you have enough free space on your removable device.
          - Make sure you have the latest version of Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar.
          - Make sure you run the portable version as an administrator.
          - Make sure you have the latest drivers for your audio device.
          - Make sure you have no conflicting software or plugins installed on your computer.
          - Try running the portable version in compatibility mode for Windows 7 or 8.
          The portable version of Adobe Audition CC does not recognize my audio device or has poor sound quality.- Make sure you have selected the correct audio device in Edit > Preferences > Audio Hardware.
          - Make sure you have adjusted the buffer size and sample rate in Edit > Preferences > Audio Hardware.
          - Make sure you have enabled ASIO drivers for your audio device if available.
          - Make sure you have disabled any enhancements or effects on your audio device in Windows Sound Settings.
          - Try using a different USB port or cable for your removable device.
          The portable version of Adobe Audition CC does not integrate with other Adobe applications or Creative Cloud services.- Make sure you have installed other Adobe applications or Creative Cloud services on your computer.
          - Make sure you have signed in with your Adobe ID in Help > Sign In.
          - Make sure you have enabled Dynamic Link in Edit > Preferences > Media & Disk Cache.
          - Make sure you have updated other Adobe applications or Creative Cloud services to their latest versions.
          The portable version of Adobe Audition CC does not support some file formats or codecs.- Make sure you have installed the required codecs or plugins for the file formats you want to import or export.
          - Make sure you have selected the correct file format and settings in File > Import > File or File > Export > File.
          - Try converting your file formats to a more compatible format like WAV or MP3 using another software application.
          The portable version of Adobe Audition CC does not save my settings or preferences.- Make sure you have enough free space on your removable device.
          - Make sure you have not changed the location or name of the portable version folder.
          - Make sure you have not deleted or modified any files inside the portable version folder.
          - Try copying the portable version folder to another removable device or computer.
          -
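The buffer-size advice in the table reflects a simple trade-off: larger buffers reduce dropouts and glitches but add monitoring latency. The cost of a given setting can be estimated directly (a sketch; the real figure also depends on the ASIO driver and hardware, which add their own overhead):

```python
def buffer_latency_ms(buffer_size, sample_rate):
    """Estimated one-way latency, in milliseconds, of an audio buffer:
    the time it takes to fill `buffer_size` samples at `sample_rate` Hz."""
    return 1000.0 * buffer_size / sample_rate

# 512 samples at 48 kHz is roughly 10.7 ms each way
latency = buffer_latency_ms(512, 48000)
```

Doubling the buffer halves the risk of underruns but doubles this delay, which is why tracking sessions favor small buffers and mixing sessions favor large ones.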

          Conclusion

          -

          In conclusion, Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar is a powerful and versatile audio editing software that can help you create, mix, edit, and restore audio for various purposes.

          -

          It has a user-friendly interface and workflow that can be customized to your preferences.

          -

          It has a comprehensive and professional toolset for audio post-production and restoration.

          -

          It has a seamless integration with other Adobe applications and Creative Cloud services.

          -

          It has a large and active community of users and experts who provide support and feedback.

          -

          However, it also has some drawbacks that you should consider before using it.

          -

          It lacks MIDI support and virtual instruments for music production.

          -

          It is only available via an expensive monthly subscription that requires an internet connection.

          -

          It can be overkill for simple or casual audio editing tasks.

          -

          It can have compatibility issues with some third-party plugins and hardware devices.

          -

          If you are looking for a professional audio workstation that can handle any kind of audio project, you might want to give Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar a try.

          -

          It is a portable version of Adobe Audition CC 2015.2 that does not require installation or activation.

          -

          You can run it from any removable device, such as a USB flash drive or an external hard drive, without affecting your system registry or leaving any traces behind.

          -

          You can download it from SolidTorrents or other reliable sources and extract it using a program that can handle RAR files.

          -

          You can then run it by double-clicking on the executable file or right-clicking on it and selecting "Run as administrator".

          -

          You can then use it as usual without installing anything on your computer.

          -

          FAQs

          -

          Here are some frequently asked questions and answers about Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar:

          -
            -
          1. Q: Is Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar safe to use?
            A: Yes, as long as you download it from a trustworthy source and scan it with an antivirus program before running it.
          2. -
          3. Q: Is Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar legal to use?
            A: It depends on your local laws and regulations. Adobe does not endorse or support the portable version of its software, and it may violate its terms of service and license agreement. Use it at your own risk and discretion.
          4. -
          5. Q: Can I use Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar on multiple computers?
            A: Yes, you can use it on any computer that meets the system requirements and has a compatible audio device.
          6. -
          7. Q: Can I update Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar to the latest version?
            A: No, you cannot update the portable version of Adobe Audition CC 2015.2 to the latest version. You need to download the latest portable version from another source or subscribe to the official version from Adobe.
          8. -
          9. Q: Can I use my existing plugins and presets with Adobe Audition CC 2015.2 V9.2.0.191 (64-Bit)Portable,[by Robert] .rar?
            A: Yes, you can use your existing plugins and presets with the portable version of Adobe Audition CC 2015.2, as long as they are compatible with the software and stored in the same folder as the executable file.
          10. -
          -

          -
          -
\ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Autodesk 3ds Max 2014 _BEST_ Keygen Xforce.md b/spaces/tialenAdioni/chat-gpt-api/logs/Autodesk 3ds Max 2014 _BEST_ Keygen Xforce.md deleted file mode 100644 index f5b8fc72117c249761d9f1dbb89ab5e992e42ea0..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Autodesk 3ds Max 2014 _BEST_ Keygen Xforce.md +++ /dev/null @@ -1,33 +0,0 @@ - -

          How to Activate Autodesk 3ds Max 2014 with Xforce Keygen

          -

          Autodesk 3ds Max 2014 is a powerful software for creating 3D animations, models, games and images. It offers a variety of tools and features to help you design and visualize your projects. However, to use this software, you need to activate it with a valid serial number and product key.

          -

          One way to activate Autodesk 3ds Max 2014 is to use Xforce Keygen, a program that generates activation codes for various Autodesk products. Xforce Keygen can be downloaded from various websites, such as [^1^] or [^2^]. However, you should be careful when downloading and using Xforce Keygen, as it may contain viruses or malware that can harm your computer. You should also only use Xforce Keygen for educational purposes and not for commercial use.

          -

          Autodesk 3ds Max 2014 Keygen Xforce


          Download Ziphttps://urlcod.com/2uKaGs



          -

          To activate Autodesk 3ds Max 2014 with Xforce Keygen, you need to follow these steps:

          -
            -
          1. Finish the installation and restart Autodesk 3ds Max 2014.
          2. -
          3. Before clicking on Activate, make sure to disable your internet connection and antivirus software.
          4. -
          5. Click on Activate and if it tells you that your serial is wrong, simply click on Close and click on Activate again.
          6. -
          7. Select I have an activation code from Autodesk.
          8. -
          9. Start Xforce Keygen 32-bits or 64-bits version depending on your system.
          10. -
          11. Click on Patch (you should see successfully patched).
          12. -
          13. Copy the request code and paste it into the keygen and press Generate.
          14. -
          15. Now copy the activation code, go back to the activation screen and paste the code.
          16. -
          17. Click Next. You have a fully registered Autodesk product.
          18. -
          -

          Congratulations! You have successfully activated Autodesk 3ds Max 2014 with Xforce Keygen. You can now enjoy using this software for your creative projects.


          Autodesk 3ds Max 2014 also offers a range of new features and enhancements for 3D modeling, texturing, rendering, animation and effects. Some of the highlights include:

          -
            -
          • New Retopology tools that automatically optimize the geometry of high-resolution models to create a clean, quad-based mesh topology.
          • -
          • New Smart Extrude feature that extrudes faces on 3D objects in an intuitive and flexible way, rebuilding and stitching adjacent faces automatically.
          • -
          • New Perspective Match option that facilitates the process of matching 3D geometry to a photographic plate, generating perspective guidelines and locking the registration of 3D and 2D elements.
          • -
          • New support for Open Shading Language (OSL), a scripting language for creating procedural textures and shaders that can be used with any supported renderer.
          • -
          • New integration of mParticles into the existing Particle Flow toolset, enabling particles to interact with other dynamics simulations using the MassFX system.
          • -
          • New enhancements to the Nitrous viewport, such as adaptive degradation options, improved depth of field and DirectX 11 rendering support.
          • -
          -

          With these new features and more, Autodesk 3ds Max 2014 provides a comprehensive and powerful solution for creating stunning 3D content for games, visual effects, design visualization and more.

          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/De Las Vacas Sagradas Se Hacen Las Mejores Hamburguesas By Dr David Brandt Robert Kriegelpdf Fix.md b/spaces/tialenAdioni/chat-gpt-api/logs/De Las Vacas Sagradas Se Hacen Las Mejores Hamburguesas By Dr David Brandt Robert Kriegelpdf Fix.md deleted file mode 100644 index 0a2525eaa8d19d85f63650e668f1cedeef0865b3..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/De Las Vacas Sagradas Se Hacen Las Mejores Hamburguesas By Dr David Brandt Robert Kriegelpdf Fix.md +++ /dev/null @@ -1,22 +0,0 @@ - -

          How to Turn Sacred Cows into Great Burgers: A Book Review

          -

          Have you ever wondered how to challenge the status quo and unleash your creativity in your work and life? If so, you might want to read De las vacas sagradas se hacen las mejores hamburguesas (Sacred Cows Make the Best Burgers) by Dr. David Brandt and Robert Kriegel, a bestselling book that offers practical advice on how to overcome resistance to change and innovation.

          -

          De Las Vacas Sagradas Se Hacen Las Mejores Hamburguesas By Dr David Brandt Robert Kriegelpdf


          DOWNLOAD >>>>> https://urlcod.com/2uK606



          -

          The authors use the metaphor of sacred cows to describe the outdated beliefs, practices, and habits that hold us back from achieving our full potential. They argue that sacred cows are everywhere: in our organizations, our industries, our cultures, and even our own minds. They prevent us from adapting to changing circumstances, taking risks, and exploring new possibilities.

          -

          However, sacred cows can also be turned into great burgers: delicious opportunities for growth, improvement, and success. The book provides a step-by-step guide on how to identify, challenge, and eliminate sacred cows in various domains of our lives. It also offers examples of individuals and companies that have successfully transformed their sacred cows into burgers, such as Apple, Nike, Starbucks, and Southwest Airlines.

          -

          Some of the key lessons from the book are:

          -
            -
          • Don't let fear of failure stop you from trying new things. Failure is inevitable and necessary for learning and innovation.
          • -
          • Don't let complacency make you settle for less than you deserve. Complacency is the enemy of excellence and growth.
          • -
          • Don't let tradition limit your vision. Tradition is not a reason to do something; it's a result of doing something.
          • -
          • Don't let conformity stifle your individuality. Conformity is the death of creativity and diversity.
          • -
          • Don't let bureaucracy slow you down. Bureaucracy is a waste of time, energy, and resources.
          • -
          -

          If you are looking for a book that will inspire you to think differently, act boldly, and create value, De las vacas sagradas se hacen las mejores hamburguesas is a great choice. It will help you turn your sacred cows into great burgers and enjoy the taste of success.

          -

          - -

          The book is divided into four parts: Part One explains what sacred cows are and how they affect us; Part Two describes the seven steps to turn sacred cows into burgers; Part Three explores the specific sacred cows in different areas of our lives, such as work, relationships, health, and money; and Part Four provides some tools and tips to keep sacred cows away and maintain a burger mindset.

          -

          The book is written in a clear, engaging, and humorous style, with anecdotes, quotes, exercises, and checklists to illustrate the points and help the readers apply them. The book is also available in Spanish, with the title De las vacas sagradas se hacen las mejores hamburguesas, which literally means "From the sacred cows, the best burgers are made". The book has been praised by critics and readers alike as a refreshing and inspiring guide to personal and professional transformation.

          -

          If you are interested in reading this book, you can find it online or in your local bookstore. You can also visit the authors' website at www.kriegel.com to learn more about their work and other books. You can also watch a video of Robert Kriegel talking about sacred cows and burgers at https://www.youtube.com/watch?v=5yLZL4jwZ8Q.

          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dil Se Full Movie With English Subtitles Download Of 42.md b/spaces/tialenAdioni/chat-gpt-api/logs/Dil Se Full Movie With English Subtitles Download Of 42.md deleted file mode 100644 index fec0198aa51e3134a2d7db69a07fa5c5c8ea96fa..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Dil Se Full Movie With English Subtitles Download Of 42.md +++ /dev/null @@ -1,18 +0,0 @@ - -

          How to Watch Dil Se, a Romantic Thriller Starring Shah Rukh Khan, with English Subtitles

          -

          Dil Se is a 1998 Hindi movie directed by Mani Ratnam and starring Shah Rukh Khan, Manisha Koirala and Preity Zinta. It is the third installment of Ratnam's trilogy of films that depict the political and social issues of India. The film explores the themes of love, terrorism and patriotism through the story of Amar, a journalist who falls in love with a mysterious woman named Meghna.

          -

          If you want to watch Dil Se with English subtitles, you have several options to choose from. Here are some of them:

          -

          Dil Se Full Movie With English Subtitles Download Of 42


          Download Filehttps://urlcod.com/2uK4a4



          -
            -
          • You can download the movie from various online platforms that offer subtitles in different languages. For example, you can find Dil Se on opensubtitles.com, where you can choose from 38 subtitles in English. You can also download the movie from Microsoft Sway, where you can find a link to download Dil Se with English subtitles. However, be careful when downloading files from unknown sources, as they may contain viruses or malware.
          • -
          • You can stream the movie online from various websites that offer subtitles in different languages. For example, you can watch Dil Se on YouTube, where you can find a full episode of Dil Se with English subtitles. You can also find an audiobook version of Dil Se with English subtitles on SoundCloud. However, be aware that some websites may have low-quality video or audio, or may not have the complete movie.
          • -
          • You can buy or rent the movie from various online platforms that offer subtitles in different languages. For example, you can find Dil Se on Amazon, where you can buy or rent the DVD or Blu-ray version of the movie with English subtitles. You can also find the movie on Netflix, where you can stream the movie with English subtitles if you have a subscription. However, be prepared to pay a fee for these services, and check the availability of the movie in your region.
          • -
          -

          Whichever option you choose, we hope you enjoy watching Dil Se, a captivating movie that showcases the talent of Shah Rukh Khan and Mani Ratnam.

          -

          Dil Se is not only a romantic movie, but also a political one. It depicts the conflict between the Indian government and the separatist groups in the northeastern states, especially Assam. The movie shows the different perspectives of the characters on the issue of national unity and identity. Amar, who represents the mainstream Indian society, is unable to understand the grievances and motivations of the rebels, who are seen as terrorists by the state. Meghna, who is a member of one such group, is driven by a personal trauma and a sense of revenge against the state that has oppressed her people. Preeti, who is a modern and educated woman, is oblivious to the realities of the violence and turmoil in the region.

          -

          Dil Se is also a musical movie, with a soundtrack composed by A. R. Rahman, who won his third Filmfare Award for Best Music Director for this film. The songs are not only catchy and melodious, but also convey the emotions and moods of the characters and the situations. The songs are also integrated into the narrative, rather than being separate musical numbers. For example, the song "Chaiyya Chaiyya", which features Shah Rukh Khan and Malaika Arora dancing on top of a moving train, is used to show Amar's fascination with Meghna and his pursuit of her. The song "Dil Se Re", which is sung by Rahman himself, is used to show Amar's confession of his love for Meghna and his desperation to win her over.

          -

          Dil Se is a movie that challenges the conventions of Bollywood cinema, both in terms of its content and its form. It is an example of parallel cinema, which is a movement of Indian films that focus on realistic and socially relevant themes, rather than on escapist and formulaic entertainment. It is also noted for its nonlinear storytelling, which uses flashbacks, flash-forwards, dream sequences and symbolism to create a complex and layered narrative. The movie also uses innovative cinematography by Santosh Sivan, who won a National Film Award for Best Cinematography for this film. The movie uses various techniques such as long shots, close-ups, slow motion, fast motion, freeze frames and color filters to create stunning visuals and enhance the mood and atmosphere of the film.

          -

          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Fast And Furious 8 (English) Download Where to Find the Best Links and Torrents for the Thrilling Movie.md b/spaces/tialenAdioni/chat-gpt-api/logs/Fast And Furious 8 (English) Download Where to Find the Best Links and Torrents for the Thrilling Movie.md deleted file mode 100644 index 6c3e6673314cac4b39ede14f1471a466263e0275..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Fast And Furious 8 (English) Download Where to Find the Best Links and Torrents for the Thrilling Movie.md +++ /dev/null @@ -1,125 +0,0 @@ - -

          Fast and Furious 8 (English) Download: How to Watch the Action-Packed Sequel Online

          -

          If you are a fan of high-octane action films, you might be interested in watching Fast and Furious 8, also known as The Fate of the Furious. It is the eighth installment in the popular Fast & Furious franchise, which follows a group of street racers turned international spies. In this film, Dominic Toretto (Vin Diesel), the leader of the team, is seduced by a mysterious cyberterrorist named Cipher (Charlize Theron) and turns against his family. The rest of the crew must team up with their former enemy Deckard Shaw (Jason Statham) to stop Dom and Cipher from unleashing chaos on the world.

          -

          The film was directed by F. Gary Gray and written by Chris Morgan. It was released in April 2017 and became one of the highest-grossing films of that year, earning over $1.2 billion worldwide. It received generally positive reviews from critics and audiences, who praised its performances, action sequences, humor, and soundtrack. The film also features a star-studded cast that includes Dwayne Johnson, Michelle Rodriguez, Tyrese Gibson, Ludacris, Scott Eastwood, Nathalie Emmanuel, Kurt Russell, Elsa Pataky, Kristofer Hivju, and Luke Evans.

          -

          Fast And Furious 8 (English) Download


          Download Ziphttps://urlcod.com/2uKa8y



          -

          In this article, we will provide you with some information on how to download and watch Fast and Furious 8 online. Whether you want to rent or buy the film legally and safely, or stream it online for free legally, we have got you covered. Read on to find out more.

          -

          Why You Should Watch Fast and Furious 8

          -

          Fast and Furious 8 is a film that delivers what it promises: a fast-paced, adrenaline-fueled, action-packed adventure that will keep you on the edge of your seat. Here are some of the reasons why you should watch it:

          -
            -
          • It has received positive reviews from critics and audiences alike. On Rotten Tomatoes, it has a rating of 67% based on 316 reviews, with an average score of 6/10. The site's consensus reads: "The Fate of the Furious opens a new chapter in the franchise, fueled by the same infectious cast chemistry and over-the-top action fans have come to expect." On IMDb, it has a rating of 6.6/10 based on over 240,000 votes.
          • -
          • It features thrilling action sequences that will blow your mind. From car chases to shootouts to explosions, Fast and Furious 8 has it all. Some of the highlights include a race through Havana's streets, a prison riot, a submarine chase in Russia's frozen tundra, and a showdown in New York City involving hundreds of hacked cars.
          • -
          • It showcases exotic locations around the world. The film takes you on a global tour that spans Cuba, Germany, New York City, Russia, and more. You will get to see stunning scenery, diverse cultures, and iconic landmarks.
          • -
          • It boasts a star-studded cast that has great chemistry and charisma. The film reunites most of the original cast members who have become like a family over the years. They deliver witty dialogue, hilarious banter, and emotional moments. The film also introduces some new faces who add more flavor and intrigue to the story. Charlize Theron is menacing as Cipher, while Jason Statham steals every scene he is in as Deckard Shaw.
          • -
          • It continues the legacy of the Fast & Furious franchise. The film pays tribute to Paul Walker, who played Brian O'Conner in previous films but died in a car accident in 2013. It also sets up future installments that will explore more stories and characters in this universe.
          • -
          -

          How to Download Fast and Furious 8 Legally and Safely

          -

          If you want to download Fast and Furious 8 legally and safely, you need to use a reputable platform or service that offers it for rent or purchase. This way, you can avoid any potential viruses, malware, or legal issues that might come with using illegal or pirated sites.

          -

          Here are some of the best options for downloading Fast and Furious 8 legally and safely:

          | Platform/Service | Price | Features/Benefits |
          | --- | --- | --- |
          | Amazon Prime Video | $3.99 for rent (HD), $9.99 for purchase (HD) | Accessible on various devices such as smartphones, tablets, computers, smart TVs, and streaming devices; offers an offline viewing option; provides subtitles, audio descriptions, and the X-Ray feature for more information about actors, scenes, and trivia |
          | iTunes | $3.99 for rent (HD), $9.99 for purchase (HD) | Compatible with Apple devices such as iPhones, iPads, Macs, Apple TVs, and HomePods; supports an offline viewing option; includes extras such as deleted scenes, behind-the-scenes footage, and commentary |
          | Google Play Movies & TV | $3.99 for rent (HD), $9.99 for purchase (HD) | Compatible with Android devices such as smartphones, tablets, Chromebooks, and Chromecasts; allows an offline viewing option; supports subtitles, closed captions, and audio tracks in different languages |
          | Vudu | $3.99 for rent (HD), $9.99 for purchase (HD) | Available on various devices such as smartphones, tablets, computers, smart TVs, and streaming devices; offers an offline viewing option; provides extras such as deleted scenes, behind-the-scenes footage, and commentary; supports Dolby Atmos sound quality |
          | FandangoNOW | $3.99 for rent (HD), $9.99 for purchase (HD) | Accessible on various devices such as smartphones, tablets, computers, smart TVs, and streaming devices; allows an offline viewing option; includes extras such as deleted scenes, behind-the-scenes footage, and commentary; supports Dolby Vision HDR quality |
          | Microsoft Store | $3.99 for rent (HD), $9.99 for purchase (HD) | Compatible with Windows devices such as PCs, laptops, tablets, and Xbox consoles; supports an offline viewing option; provides subtitles, closed captions, and audio tracks in different languages; includes extras such as deleted scenes, behind-the-scenes footage, and commentary |
          | YouTube Movies | $3.99 for rent (HD), $9.99 for purchase (HD) | Compatible with various devices such as smartphones, tablets, computers, smart TVs, and streaming devices; allows an offline viewing option; supports subtitles, closed captions, and audio tracks in different languages; includes extras such as deleted scenes, behind-the-scenes footage, and commentary |

          How to Stream Fast and Furious 8 Online for Free

          If you want to stream Fast and Furious 8 online for free legally, you need to use a reputable streaming platform or service that offers it for free. This way, you can avoid any potential viruses, malware, or legal issues that might come with using illegal or pirated sites. Here are some of the best options for streaming Fast and Furious 8 online for free legally:

          • HBO Max: This is a premium streaming service that offers a vast library of movies, shows, and originals, and it includes Fast and Furious 8 as part of its catalog. You can watch it online for free if you have an existing HBO subscription through your cable or satellite provider. Alternatively, you can sign up for a 7-day free trial and cancel anytime before it expires; the service costs $14.99 per month after the trial period.
          • Peacock: This is a streaming service from NBCUniversal that offers a mix of live and on-demand content, and it features Fast and Furious 8 as part of its collection. You can watch it online for free with ads if you sign up for the Peacock Free plan, which gives you access to thousands of hours of movies and shows. You can also upgrade to the Peacock Premium plan for $4.99 per month or the Peacock Premium Plus plan for $9.99 per month, which offer more content and fewer or no ads.
          • Tubi: This is a free streaming service that offers thousands of movies and shows across various genres, and it has Fast and Furious 8 in its lineup. You can watch it online for free with ads if you register for an account, which is also free. You can access Tubi on various devices such as smartphones, tablets, computers, smart TVs, and streaming devices.

          Conclusion

          -

          Fast and Furious 8 is a film that will appeal to fans of action, adventure, and cars. It is the eighth installment in the Fast & Furious franchise, which follows a group of street racers turned international spies. In this film, Dominic Toretto is coerced by a cyberterrorist named Cipher to betray his team and join her in a global scheme.

          -

          In this article, we have provided you with some information on how to download and watch Fast and Furious 8 online. You can choose to rent or buy the film legally and safely from reputable platforms or services that offer it for a reasonable price. You can also stream the film online for free legally from streaming platforms or services that offer it as part of their catalog or free trials.

          -


          -

          We hope you have enjoyed reading this article and found it useful. If you have any questions or comments, please feel free to share them below. And if you are ready to watch Fast and Furious 8 online, buckle up and enjoy the ride!

          -

          FAQs

          -
            -
          • Q: How long is Fast and Furious 8?
          • -
          • A: The theatrical version of Fast and Furious 8 is 136 minutes long, while the extended director's cut is 149 minutes long.
          • -
          • Q: Who dies in Fast and Furious 8?
          • -
          • A: The main characters who die in Fast and Furious 8 are Elena Neves (Elsa Pataky), who is killed on Cipher's orders to force Dom to cooperate with her, and Cipher's henchman Rhodes (Kristofer Hivju), who is killed by Dom in revenge for Elena's death.
          • -
          • Q: Is Paul Walker in Fast and Furious 8?
          • -
          • A: No, Paul Walker, who played Brian O'Conner in previous films, is not in Fast and Furious 8. He died in a car accident in November 2013 before the release of Furious 7. His character is mentioned briefly in Fast and Furious 8 as being retired and living happily with Mia (Jordana Brewster) and their children.
          • -
          • Q: Is there a post-credits scene in Fast and Furious 8?
          • -
          • A: No, there is no post-credits scene in Fast and Furious 8. However, there is a mid-credits scene that shows Deckard Shaw visiting his brother Owen Shaw in the hospital and bringing him a gift from Dom: Cipher's location.
          • -
          • Q: What is the next film in the Fast & Furious franchise?
          • -
          • A: The next film in the Fast & Furious franchise is F9: The Fast Saga, which was released in June 2021. It is the ninth installment in the main series and follows Dom and his team as they face a new threat: Dom's estranged brother Jakob (John Cena), who is working with Cipher.
          • -
          -

          \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Castle Story Full Version and Explore a Voxel World.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Castle Story Full Version and Explore a Voxel World.md deleted file mode 100644 index 1ecf69439ee4d08df9bf77b67bc86be192c04986..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Castle Story Full Version and Explore a Voxel World.md +++ /dev/null @@ -1,106 +0,0 @@ - -

          Castle Story Download Free Full Version: A Guide for Beginners

          -

          If you are looking for a game that lets you build and defend your own castles, explore and reshape a voxel-based world, and challenge your friends or cooperate with them in multiplayer modes, then you might want to check out Castle Story. In this article, we will show you how to download Castle Story for free, what are the system requirements for running the game, and what are some tips and tricks for playing it.

          -

          What is Castle Story?

          -

          Castle Story is a creative voxel-based strategy game developed by Sauropod Studio, an indie game developer based in Montreal, Canada. The game was funded through a crowdfunding campaign on Kickstarter in 2012, and was fully released in August 2017 for Linux, macOS, and Windows.

          -

          castle story download free full version


          Download File ===> https://bltlly.com/2uOsAO



          -

          A creative voxel-based strategy game

          -

          In Castle Story, you are in control of workers called Bricktrons, which can be directed to gather resources, build castles, and fight enemies. The aim is to build a castle that can withstand attacks from creatures and other players. The game takes place on massive floating islands, which you can reshape with your voxel-based tools. You can design and build any structure you can imagine, from mighty and legendary castles to sprawling Bricktron villages.

          -

          A multiplayer experience

          -

          Castle Story is also a multiplayer experience. You can challenge your friends to a round of Conquest or join forces with them to repel the enemies in the Co-Op Invasion game mode. Of course, you can also group up in sandbox mode and build together. The game supports online multiplayer with up to 16 players per server.

          -


          -

          A world editor tool

          -

          The maps in Castle Story are meticulously hand-crafted by the developers and they are giving you access to the same tool they use. You can create your own worlds with their awesome World Editor, share them with the community, and try out other player-made maps. The World Editor allows you to customize the terrain, the resources, the enemies, and the objectives of your maps.

          -

          How to download Castle Story for free?

          -

          There are several ways to download Castle Story for free. Here are some of them:

          -

          The official website

          -

          You can download Castle Story for free from the official website of Sauropod Studio. However, this option is only available for those who backed the game on Kickstarter or pre-ordered it before its release. If you are one of them, you can log in with your email address and password, and download the game from your account page.

          -

          The Steam platform

          -

          You can also download Castle Story for free from the Steam platform. However, this option requires you to have a Steam account and a valid key for the game. If you backed the game on Kickstarter or pre-ordered it before its release, you should have received a Steam key via email. If not, you can contact Sauropod Studio's support team and request one. Once you have your key, you can activate it on Steam and download the game from your library.

          -

          The YouTube tutorial

          -

          Another way to download Castle Story for free is to follow a YouTube tutorial that shows you how to do it. For example, there is a video by ItsMe Prince that shows you how to download Castle Story 0.9 for free. The video provides a link to the game file and a torrent file, as well as a link to uTorrent, a torrent downloader. The video also shows you how to install the game and run it with 100% gameplay proof. However, this option may not be legal or safe, as it may violate the game's terms of service or contain viruses or malware. Therefore, we do not recommend this option and advise you to download the game from official or trusted sources only.

          -

          What are the system requirements for Castle Story?

          -

          Before you download Castle Story for free, you should make sure that your computer meets the minimum or recommended system requirements for running the game. Here are the system requirements for Castle Story according to Steam:

          -

          Minimum requirements

          | Component | Minimum |
          | --- | --- |
          | OS | Windows 7 SP1, Windows 8, Windows 10 |
          | Processor | Intel or AMD Dual-Core, 2.2 GHz+ |
          | Memory | 6 GB RAM |
          | Graphics | nVidia GeForce 440 512MB, Radeon HD 4450 512MB, Intel HD 3000 |
          | DirectX | Version 11 |
          | Storage | 1700 MB available space |
          | Additional Notes | Playing on large, player-made maps might affect performance. |
          -

          Recommended requirements

          | Component | Recommended |
          | --- | --- |
          | OS | Windows 7 SP1, Windows 8, Windows 10 |
          | Processor | Intel or AMD Quad-Core, 2.8 GHz+ |
          | Memory | 8 GB RAM |
          | Graphics | nVidia GeForce 660 1024MB or better, AMD Radeon HD 7790 1024MB or better |
          | DirectX | Version 11 |
          | Storage | 1700 MB available space |
          | Additional Notes | Playing on large, player-made maps might affect performance. |
          -

          What are some tips and tricks for playing Castle Story?

          -

          If you have downloaded Castle Story for free and want to enjoy the game to the fullest, here are some tips and tricks that might help you:

          -

          Learn the controls and mechanics

          -

          The first thing you should do is learn the basic controls and mechanics of the game. You can access the tutorial from the main menu, which will teach you how to move the camera, select and command your Bricktrons, gather resources, build structures, and fight enemies. You can also check the options menu for more advanced controls and settings. You should also familiarize yourself with the different types of Bricktrons, resources, structures, enemies, and game modes that are available in Castle Story.

          -

          Plan your castle design and defense

          -

          The next thing you should do is plan your castle design and defense. You should consider the location, size, shape, and style of your castle, as well as the materials and tools you will need to build it. You should also think about how to protect your castle from enemy attacks, such as placing walls, gates, towers, traps, catapults, and archers. You should also make sure that your castle has enough space for your Bricktrons to work and live comfortably.

          -

          Use the debug menu and sandbox mode

          -

          The last thing you should do is use the debug menu and sandbox mode to experiment with different features and settings of the game. You can access the debug menu by pressing F1 on your keyboard, which will allow you to spawn Bricktrons, resources, enemies, structures, and more. You can also access the sandbox mode from the main menu, which will let you play on any map with unlimited resources and no objectives or enemies. You can use these modes to test your castle design and defense, try out different strategies and tactics, or just have fun with the game.

          -

          Conclusion

          -

          Castle Story is a creative voxel-based strategy game that lets you build and defend your own castles, explore and reshape a voxel-based world, and challenge your friends or cooperate with them in multiplayer modes. You can download Castle Story for free from the official website, the Steam platform, or a YouTube tutorial, but you should make sure that your computer meets the system requirements and that you download the game from legal and safe sources. You should also learn the controls and mechanics of the game, plan your castle design and defense, and use the debug menu and sandbox mode to experiment with different features and settings of the game. We hope that this article has helped you with downloading and playing Castle Story for free. Have fun!

          -

          FAQs

          -

          Here are some frequently asked questions about Castle Story:

          -

          Q: Is Castle Story still in development?

          -

          A: Yes, Castle Story is still in development. The developers are constantly working on improving the game, adding new features, fixing bugs, and listening to feedback from the community. You can follow their progress on their official website, their Steam page, their Twitter account, or their Discord server.

          -

          Q: How can I support Castle Story?

          -

          A: You can support Castle Story by buying the game from the official website or the Steam platform, leaving a positive review or rating, sharing the game with your friends, or donating to the developers via PayPal or Patreon.

          -

          Q: How can I mod Castle Story?

          -

          A: You can mod Castle Story by using the World Editor tool to create your own maps, or by using the Lua scripting language to create your own game modes, scenarios, or mechanics. You can also download and install mods made by other players from the Steam Workshop or the official forums.

          -

          Q: How can I report a bug or a problem in Castle Story?

          -

          A: You can report a bug or a problem in Castle Story by using the in-game bug report tool, which can be accessed by pressing F2 on your keyboard. You can also report a bug or a problem on the official forums, the Steam discussions, or the Discord server.

          -

          Q: How can I contact Sauropod Studio?

          -

          A: You can contact Sauropod Studio by sending them an email at info@sauropodstudio.com, or by filling out their contact form on their official website. You can also follow them on their social media accounts, such as Facebook, Instagram, YouTube, or Twitch.

          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Arcade Pc Loader V1.4.md deleted file mode 100644 index 504b9f48869488f9772939ba21ef289e1228c680..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Arcade Pc Loader V1.4.md +++ /dev/null @@ -1,27 +0,0 @@ - -

          How to Play Arcade Games on Your PC with ArcadePC Loader V1.4

          -

          If you are a fan of arcade games, you might have heard of ArcadePC Loader, a frontend that allows you to run and configure arcade-PC based games like Taito Type X/X+/X2 and other platforms like Examu (Arcana Hearts 3) and e-AMUSEMENT (Otomedius). ArcadePC Loader is a great tool that lets you enjoy arcade games on your PC with high resolution, custom settings, and various hacks.

          -

          In this article, we will show you how to download, install, and use ArcadePC Loader V1.4, the latest version of this frontend that supports many popular arcade games such as Super Street Fighter IV Arcade Edition, Samurai Shodown: Edge of Destiny, BlazBlue Continuum Shift, King of Fighters XII, and more. We will also give you some tips and tricks to optimize your gaming experience with ArcadePC Loader V1.4.

          -

          Arcade Pc Loader V1.4


          Downloadhttps://urlcod.com/2uHyQq



          -

          How to Download and Install ArcadePC Loader V1.4

          -

          The first step is to download ArcadePC Loader V1.4 from a reliable source. You can find it on various websites that offer arcade emulation software, such as EmuCR, Software Informer, or NewPcD. Make sure you download the correct file that matches your operating system (Windows XP/Vista/7/8/10).

          -

          Once you have downloaded the file, extract it to a folder of your choice using a program like WinRAR or 7-Zip. You should see a folder named "ArcadePC Loader" with several files and subfolders inside. This is the main folder of the frontend where you will store your arcade games and settings.
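          The extraction step above can also be scripted. Below is a minimal Python sketch that unpacks an archive into a destination folder; it assumes the download is a ZIP archive (the filename and paths are illustrative, not the actual release names):

```python
import zipfile
from pathlib import Path

def extract_loader(archive_path: str, dest_dir: str) -> Path:
    """Extract the loader archive into dest_dir and return the folder path."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest)  # unpack every file, preserving subfolders
    return dest

# Usage (paths are hypothetical):
# extract_loader("ArcadePCLoader_v1.4.zip", r"C:\Games\ArcadePC Loader")
```

          For .rar downloads, WinRAR or 7-Zip as described above is the simpler route, since Python's standard library does not read RAR archives.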

          -

          To install ArcadePC Loader V1.4, simply run the executable file named "Loader.exe" inside the main folder. You will see a graphical user interface (GUI) that shows a list of supported arcade games on the left side and some options on the right side. You can change the language of the GUI by clicking on the flag icon on the top right corner.

          -

          How to Add and Run Arcade Games with ArcadePC Loader V1.4

          -

          To add arcade games to ArcadePC Loader V1.4, you need to download the game files (ROMs) from the internet. You can find them on various websites that offer arcade ROMs, such as EmuParadise, RomHustler, or CoolROM. Make sure you download the correct files that match the arcade platform (Taito Type X/X+/X2 etc.) and the game title.

          -

          Once you have downloaded the game files, extract them to a subfolder inside the "Games" folder of ArcadePC Loader. The subfolder name should be the same as the game title. For example, if you want to add Super Street Fighter IV Arcade Edition, create a subfolder named "Super Street Fighter IV Arcade Edition" inside the "Games" folder and extract the game files there.
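          The folder convention described above (one subfolder per game title under "Games") can be set up and checked with a short script. This is a sketch under the assumption that the loader reads game titles from those subfolder names, as the article describes:

```python
from pathlib import Path

def add_game_folder(loader_root: str, game_title: str) -> Path:
    """Create Games/<game_title> under the loader folder, as the frontend expects."""
    game_dir = Path(loader_root) / "Games" / game_title
    game_dir.mkdir(parents=True, exist_ok=True)
    return game_dir

def list_games(loader_root: str) -> list[str]:
    """Return the game titles the loader would see (one per subfolder)."""
    games = Path(loader_root) / "Games"
    if not games.exists():
        return []
    return sorted(p.name for p in games.iterdir() if p.is_dir())
```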

          -

          To run arcade games with ArcadePC Loader V1.4, simply select the game title from the list on the left side of the GUI and click on the "Run Game" button on the right side. The game will launch in full-screen mode by default. You can exit the game by pressing the Esc key or Alt+F4.

          -

          How to Configure and Optimize Arcade Games with ArcadePC Loader V1.4

          -

          One of the best features of ArcadePC Loader V1.4 is that it allows you to configure and optimize arcade games according to your preferences and system specifications. You can access these options by clicking on the "Config Game" button on the right side of the GUI after selecting a game title from the list.

          -

          Some of the options you can adjust are:

          -

          - Screen resolution: You can choose any screen resolution for your game, from 640x480 to 1920x1080 pixels.
          - Freeplay: You can enable or disable freeplay mode for your game, which means unlimited credits and continues.
          - Hacks: You can enable or disable various hacks for your game, such as increasing the native resolution rendering, removing borders or watermarks, fixing graphics glitches, etc.
          - Sound: You can enable or disable sound effects and music for your game.
            -
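          The options listed above map naturally onto a simple INI-style settings file. The loader stores its settings in its own files, so the following configparser sketch is purely illustrative; the section and key names are assumptions, not the loader's real format:

```python
import configparser

def write_game_config(path: str, resolution: str = "1280x720",
                      freeplay: bool = True, hacks: bool = False,
                      sound: bool = True) -> None:
    """Write a hypothetical per-game settings file (names are illustrative)."""
    cfg = configparser.ConfigParser()
    cfg["display"] = {"resolution": resolution}
    cfg["gameplay"] = {"freeplay": str(freeplay).lower()}
    cfg["extras"] = {"hacks": str(hacks).lower(), "sound": str(sound).lower()}
    with open(path, "w") as f:
        cfg.write(f)

def read_game_config(path: str) -> dict:
    """Read the hypothetical settings file back into a plain dict."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return {
        "resolution": cfg["display"]["resolution"],
        "freeplay": cfg.getboolean("gameplay", "freeplay"),
        "hacks": cfg.getboolean("extras", "hacks"),
        "sound": cfg.getboolean("extras", "sound"),
    }
```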
            -
            \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Chromapure Licensing Crack [2021].md b/spaces/tioseFevbu/cartoon-converter/scripts/Chromapure Licensing Crack [2021].md deleted file mode 100644 index e75fa252f10c1449df30b5bec591d9140ecaa45c..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Chromapure Licensing Crack [2021].md +++ /dev/null @@ -1,120 +0,0 @@ -
            -

            Chromapure Licensing Crack: What You Need to Know Before You Use It

            -

            If you are interested in calibrating your display for accurate colors and contrast, you might have heard of Chromapure software. This is a popular video calibration program that has support for various modules, meters, and devices. However, you might also be tempted to use a cracked version of Chromapure to save money or access premium features. But before you do that, you should be aware of the risks and consequences of using cracked software. In this article, we will explain what Chromapure software is, what software cracking is, what are the dangers of using cracked software, and how you can avoid using it.

            -

            Chromapure Licensing Crack


            Download File: https://urlcod.com/2uHx9v



            -

            What Is Chromapure Software and Why Do People Use It?

            -

            Chromapure is a video calibration software that helps users achieve accurate colors and contrast on their displays

            -

            Chromapure is a program that allows users to quickly bring any video display with adequate controls into conformity with industry standards. It supports various calibration modules, such as Grayscale, Gamma, White Balance, Color Gamut, and Color Management, as well as pre-calibrated PRO meters.

            -

            Video calibration is the process of measuring and adjusting the electronic systems of your video display to produce an accurate picture. An accurate picture is one that correctly reproduces what the producer/director intended. Video calibration can improve the quality and appearance of your display by enhancing details, colors, contrast, brightness, and sharpness.

            -

            Some people use cracked versions of Chromapure to avoid paying for the software license or to access premium features

            -

            Chromapure software is not free. Users need to purchase a license from the official website or authorized resellers to use it legally. The price of Chromapure depends on the version (Standard or Professional), the meter (Display 3 PRO Colorimeter or i1Pro Spectrophotometer), and the bundle (Software Only or Software/Meter Bundle). The prices range from $199 to $4,995.

            -


            Some people may not want to pay for the software license or may want to access premium features that are not available in the standard version. For example, the professional version of Chromapure offers advanced features such as Auto-Calibrate, 3D LUT, HDR, and Dolby Vision. These people may resort to using cracked versions of Chromapure that they can download from various websites or forums.

            -

            -

            What Is Software Cracking and How Does It Work?

            -

            Software cracking is the process of modifying or bypassing the protection mechanisms of a software to use it without authorization

            -

            Software cracking is the illegal practice of breaking the security features of a software that prevent unauthorized use or distribution. Software developers use various methods to protect their software from piracy, such as license keys, activation codes, encryption, digital signatures, or online verification. These methods are designed to ensure that only legitimate users can access the software and its features.

            -

            Software crackers are people who try to circumvent these protection mechanisms and create modified versions of the software that can be used without paying for a license or following the terms and conditions. Software crackers use various tools and techniques to analyze, reverse engineer, or manipulate the software code and generate valid license keys, patch executable files, or remove encryption keys. They then distribute the cracked software through websites, torrents, or peer-to-peer networks.

            -

            Crackers use various methods to generate valid license keys, patch executable files, or remove encryption keys

            -

            There are different types of software cracking methods depending on the type of protection mechanism used by the software. Some of the common methods are:

            -
            - Keygen: A keygen is a program that generates valid license keys for a software. Crackers use algorithms or formulas to create keys that match the format and criteria of the original keys. Users can enter these keys into the software to activate it.
            - Patch: A patch is a program that modifies the executable file of a software to bypass or disable the protection mechanism. Crackers use hex editors or disassemblers to locate and change the code segments that are responsible for checking the license validity or performing online verification. Users can run the patch before or after installing the software to crack it.
            - Crack: A crack is a modified version of the executable file of a software that has been pre-patched by crackers. Users can replace the original executable file with the cracked one to use the software without any restrictions.
            - Loader: A loader is a program that runs in the background and intercepts the communication between the software and its protection mechanism. Crackers use debuggers or injectors to modify the memory or registry values that are used by the software to verify its license status. Users can run the loader before launching the software to crack it.
            - Nulled: A nulled software is a software that has been stripped of its protection mechanism by removing or altering its code. Crackers use decompilers or obfuscators to access and modify the source code of the software. Users can install and use the nulled software without any limitations.
            -

            In some cases, crackers may combine two or more methods to crack a software. For example, they may use a keygen and a patch together to generate a valid license key and modify the executable file.

            -

            What Are the Risks of Using Cracked Software?

            -

            Cracked software can contain malware that can infect your computer, steal your data, or download more malicious programs

            -

            One of the biggest risks of using cracked software is malware infection. Malware is any malicious program that can harm your computer or compromise your security. Some examples of malware are viruses, worms, trojans, ransomware, spyware, adware, rootkits, and bots.

            -

            Crackers may intentionally or unintentionally embed malware into their cracked software. They may do this for various reasons, such as making money from ads or ransom demands, stealing your personal information or financial data, hijacking your computer resources or network bandwidth, or spreading more malware to other computers.

            -

            When you download, install, or run cracked software, you may unknowingly expose your computer to malware infection. Malware can cause various problems for your computer and your privacy, such as slowing down your system performance, corrupting your files or programs, displaying unwanted pop-ups or ads, encrypting your data and demanding payment for decryption, monitoring your online activity or keystrokes, stealing your passwords or credit card numbers, accessing your webcam or microphone, or turning your computer into a botnet for cyberattacks.

            -


            Cracked software can expose you to legal issues, such as fines, lawsuits, or criminal charges for violating intellectual property rights

            -

            Another risk of using cracked software is legal trouble. Cracked software is illegal software that infringes the intellectual property rights of the software developers or owners. Intellectual property rights are the legal rights that protect the creations of the mind, such as inventions, designs, works of art, or trademarks. Software is considered a form of intellectual property that is protected by laws such as patents, copyrights, or trade secrets.

            -

            When you use cracked software, you are violating the intellectual property rights of the software developers or owners. You are also violating the terms and conditions of the software license agreement that you agreed to when you installed the software. These terms and conditions usually prohibit you from modifying, copying, distributing, or using the software without authorization.

            -

            By using cracked software, you may face legal consequences such as fines, lawsuits, or criminal charges. The penalties may vary depending on the jurisdiction, the type and extent of the infringement, and the damages caused by your actions. For example, in the United States, you may be fined up to $150,000 for each infringed work, sued for damages and attorney fees by the software developers or owners, or prosecuted for a federal crime with a maximum sentence of five years in prison and a $250,000 fine.

            -

            Cracked software can compromise the quality and functionality of the software, such as bugs, errors, or lack of updates

            -

            A third risk of using cracked software is poor performance and reliability. Cracked software can have various defects or limitations that can affect the quality and functionality of the software. For example, cracked software may have:

            -
            - Bugs: Cracked software may have errors or glitches that can cause unexpected results or crashes. These bugs may be introduced by crackers during the cracking process or by malware that is embedded in the cracked software.
            - Errors: Cracked software may have compatibility or security issues that can prevent it from working properly with your system or other programs. These errors may be caused by missing or outdated components, corrupted files, or incompatible drivers.
            - Lack of updates: Cracked software may not receive updates or patches from the software developers or owners. These updates or patches are essential for fixing bugs, improving features, enhancing security, or adding new functionality. Without them, your cracked software may become obsolete, vulnerable, or incompatible over time.
            -

            These defects or limitations can reduce the quality and functionality of your video calibration software. You may not be able to achieve accurate colors and contrast on your display, or you may experience poor image quality, flickering, artifacts, or lagging. You may also miss out on new features or improvements that are available in the latest version of the software.

            -

            How Can You Avoid Using Cracked Software?

            -

            You can look for free or open-source alternatives that offer similar features and benefits as Chromapure

            -

            One way to avoid using cracked software is to look for free or open-source alternatives that offer similar features and benefits as Chromapure. Free software is software that is available at no cost to users. Open-source software is software that has its source code publicly available for anyone to inspect, modify, or distribute. Free and open-source software are usually developed by communities of volunteers who share a common interest or goal.

            -

            There are many free or open-source video calibration software that you can use to calibrate your display without paying for a license or risking malware infection. Some examples are:

| Name | Description | Features |
| --- | --- | --- |
| Calibrize | A free Windows program that helps you calibrate your monitor colors in three simple steps: adjust contrast/brightness/gamma settings; measure color characteristics; create a color profile. | Supports multiple monitors; works with any graphics card; compatible with most photo/video editing programs; easy-to-use interface |
| DisplayCAL | A free cross-platform program that uses ArgyllCMS (an open-source color management system) to create high-quality color profiles for your display. | Supports various calibration devices; offers advanced options and settings; provides detailed reports and graphs; integrates with other programs such as madVR |
| HCFR Colorimeter | A free Windows program that allows you to measure and analyze various aspects of your display such as contrast ratio, gamma, color temperature, color gamut, and color accuracy. | Supports various calibration devices; offers multiple test patterns and modes; provides comprehensive data and charts; allows custom calibration settings |
| CalMAN | A free open-source program that is designed to calibrate displays using a spectrophotometer or a colorimeter. | Supports various calibration devices; offers different calibration workflows and targets; provides real-time feedback and analysis; supports HDR and Dolby Vision |
            -

            These are just some of the free or open-source video calibration software that you can try. You can search online for more options and compare their features and reviews. You can also check their documentation and support forums for more information and guidance.

            -

            You can purchase a legitimate license of Chromapure from the official website or authorized resellers

            -

            Another way to avoid using cracked software is to purchase a legitimate license of Chromapure from the official website or authorized resellers. This is the best way to ensure that you are using the original and authentic version of Chromapure that has all the features and benefits that you need. You will also be supporting the software developers and owners who have invested their time, money, and effort to create and maintain the software.

            -

            Purchasing a legitimate license of Chromapure will also give you access to updates, patches, support, and warranty. You will be able to download the latest version of the software that has bug fixes, improvements, or new functionality. You will also be able to contact the customer service or technical support team if you have any questions or issues with the software. You will also be covered by the warranty policy that guarantees your satisfaction or your money back.

            -

            To purchase a legitimate license of Chromapure, you can visit the official website or authorized resellers. You can choose the version, meter, and bundle that suit your needs and budget. You can also check the system requirements, installation instructions, user manual, and FAQs before you buy. You can pay securely online using your credit card or PayPal account. You will receive an email confirmation with your license key and download link.

            -

            You can use online tools or services that provide video calibration without installing any software

            -

            A third way to avoid using cracked software is to use online tools or services that provide video calibration without installing any software. These are web-based applications or platforms that allow you to calibrate your display using your browser or mobile device. You do not need to download, install, or run any software on your computer. You just need an internet connection and a compatible device.

            -

            There are several online tools or services that provide video calibration without installing any software. Some examples are:

| Name | Description | Features |
| --- | --- | --- |
| Lagom LCD Test | A web-based tool that helps you test and adjust various aspects of your LCD monitor such as contrast, brightness, sharpness, gamma, viewing angle, clock and phase, response time, and color gradients. | Provides simple instructions and test patterns; works with any browser and device; free to use; no registration required |
| Calibrae | A web-based service that helps you calibrate your TV or monitor using your smartphone as a colorimeter. It uses your smartphone's camera to measure the colors displayed on your screen and adjusts them accordingly. | Provides step-by-step guidance and feedback; works with any browser and device; free trial available; requires registration and subscription |
| THX Tune-Up | A mobile app that helps you calibrate your TV or projector using your smartphone as a remote control. It uses your smartphone's microphone to listen to special audio cues from your TV or projector and adjusts them accordingly. | Provides easy-to-follow instructions and tips; works with iOS and Android devices; free to download; requires HDMI cable or Apple TV connection |
            -

            These are just some of the online tools or services that provide video calibration without installing any software. You can search online for more options and compare their features and reviews. You can also check their terms of service, privacy policy, and customer support for more information and assistance.

            -

            Conclusion

            -


            In conclusion, Chromapure is a video calibration software that helps users achieve accurate colors and contrast on their displays. However, using a cracked version of Chromapure can expose you to various risks and consequences, such as malware infection, legal trouble, or poor performance and reliability. Therefore, you should avoid using cracked software and look for alternative ways to calibrate your display, such as using free or open-source software, purchasing a legitimate license, or using online tools or services. By doing so, you can enjoy the benefits of video calibration without compromising your security, privacy, or quality.

            -

            FAQs

            -

            Q1: How much does Chromapure cost?

            -

            A1: The price of Chromapure depends on the version (Standard or Professional), the meter (Display 3 PRO Colorimeter or i1Pro Spectrophotometer), and the bundle (Software Only or Software/Meter Bundle). The prices range from $199 to $4,995. You can visit the official website or authorized resellers to check the current prices and discounts.

            -

            Q2: How do I know if my software is cracked or not?

            -

            A2: There are some signs that can indicate if your software is cracked or not. For example, you may notice that your software has a different name, logo, or interface than the original one. You may also see messages or pop-ups that ask you to enter a license key, activate the software, or verify your identity. You may also experience errors, crashes, or performance issues with your software. If you suspect that your software is cracked, you should uninstall it immediately and scan your computer for malware.
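            A practical complement to the signs above is verifying the installer you downloaded against the checksum the vendor publishes (when one is provided); a tampered or repackaged build will not match. A minimal sketch using Python's standard hashlib:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_hash(path: str, published: str) -> bool:
    """Compare against the vendor's published digest, case-insensitively."""
    return sha256_of(path) == published.strip().lower()
```

            Note that a matching hash only proves the file is the one the vendor published; it says nothing about the software's licensing terms.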

            -

            Q3: What are some examples of free or open-source video calibration software?

            -

            A3: Some examples of free or open-source video calibration software are Calibrize, DisplayCAL, HCFR Colorimeter, and CalMAN. These software allow you to calibrate your display using various devices and methods. You can search online for more options and compare their features and reviews.

            -

            Q4: What are some online tools or services that provide video calibration?

            -

            A4: Some examples of online tools or services that provide video calibration are Lagom LCD Test, Calibrae, and THX Tune-Up. These tools or services allow you to calibrate your display using your browser or mobile device. You do not need to install any software on your computer. You can search online for more options and compare their features and reviews.

            -

            Q5: How can I report cracked software or websites that distribute it?

            -

            A5: If you encounter cracked software or websites that distribute it, you can report them to the software developers or owners, the internet service providers (ISPs), the web hosting providers, the domain name registrars, or the law enforcement agencies. You can also use online platforms such as Report Software Piracy or Cybercrime Reporting Portal to submit your reports anonymously. By reporting cracked software or websites that distribute it, you can help stop software piracy and protect yourself and others from its risks and consequences.

            -
            -
            \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/dist.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/dist.py deleted file mode 100644 index 824235488666c6ecdb22240b08354806fadb58ca..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/dist.py +++ /dev/null @@ -1,1222 +0,0 @@ -# -*- coding: utf-8 -*- -__all__ = ['Distribution'] - -import io -import sys -import re -import os -import warnings -import numbers -import distutils.log -import distutils.core -import distutils.cmd -import distutils.dist -import distutils.command -from distutils.util import strtobool -from distutils.debug import DEBUG -from distutils.fancy_getopt import translate_longopt -from glob import iglob -import itertools -import textwrap -from typing import List, Optional, TYPE_CHECKING -from pathlib import Path - -from collections import defaultdict -from email import message_from_file - -from distutils.errors import DistutilsOptionError, DistutilsSetupError -from distutils.util import rfc822_escape - -from setuptools.extern import packaging -from setuptools.extern import ordered_set -from setuptools.extern.more_itertools import unique_everseen, partition - -from ._importlib import metadata - -from . import SetuptoolsDeprecationWarning - -import setuptools -import setuptools.command -from setuptools import windows_support -from setuptools.monkey import get_unpatched -from setuptools.config import setupcfg, pyprojecttoml -from setuptools.discovery import ConfigDiscovery - -import pkg_resources -from setuptools.extern.packaging import version -from . import _reqs -from . 
import _entry_points - -if TYPE_CHECKING: - from email.message import Message - -__import__('setuptools.extern.packaging.specifiers') -__import__('setuptools.extern.packaging.version') - - -def _get_unpatched(cls): - warnings.warn("Do not call this function", DistDeprecationWarning) - return get_unpatched(cls) - - -def get_metadata_version(self): - mv = getattr(self, 'metadata_version', None) - if mv is None: - mv = version.Version('2.1') - self.metadata_version = mv - return mv - - -def rfc822_unescape(content: str) -> str: - """Reverse RFC-822 escaping by removing leading whitespaces from content.""" - lines = content.splitlines() - if len(lines) == 1: - return lines[0].lstrip() - return '\n'.join((lines[0].lstrip(), textwrap.dedent('\n'.join(lines[1:])))) - - -def _read_field_from_msg(msg: "Message", field: str) -> Optional[str]: - """Read Message header field.""" - value = msg[field] - if value == 'UNKNOWN': - return None - return value - - -def _read_field_unescaped_from_msg(msg: "Message", field: str) -> Optional[str]: - """Read Message header field and apply rfc822_unescape.""" - value = _read_field_from_msg(msg, field) - if value is None: - return value - return rfc822_unescape(value) - - -def _read_list_from_msg(msg: "Message", field: str) -> Optional[List[str]]: - """Read Message header field and return all results as list.""" - values = msg.get_all(field, None) - if values == []: - return None - return values - - -def _read_payload_from_msg(msg: "Message") -> Optional[str]: - value = msg.get_payload().strip() - if value == 'UNKNOWN' or not value: - return None - return value - - -def read_pkg_file(self, file): - """Reads the metadata values from a file object.""" - msg = message_from_file(file) - - self.metadata_version = version.Version(msg['metadata-version']) - self.name = _read_field_from_msg(msg, 'name') - self.version = _read_field_from_msg(msg, 'version') - self.description = _read_field_from_msg(msg, 'summary') - # we are filling author only. 
- self.author = _read_field_from_msg(msg, 'author') - self.maintainer = None - self.author_email = _read_field_from_msg(msg, 'author-email') - self.maintainer_email = None - self.url = _read_field_from_msg(msg, 'home-page') - self.download_url = _read_field_from_msg(msg, 'download-url') - self.license = _read_field_unescaped_from_msg(msg, 'license') - - self.long_description = _read_field_unescaped_from_msg(msg, 'description') - if ( - self.long_description is None and - self.metadata_version >= version.Version('2.1') - ): - self.long_description = _read_payload_from_msg(msg) - self.description = _read_field_from_msg(msg, 'summary') - - if 'keywords' in msg: - self.keywords = _read_field_from_msg(msg, 'keywords').split(',') - - self.platforms = _read_list_from_msg(msg, 'platform') - self.classifiers = _read_list_from_msg(msg, 'classifier') - - # PEP 314 - these fields only exist in 1.1 - if self.metadata_version == version.Version('1.1'): - self.requires = _read_list_from_msg(msg, 'requires') - self.provides = _read_list_from_msg(msg, 'provides') - self.obsoletes = _read_list_from_msg(msg, 'obsoletes') - else: - self.requires = None - self.provides = None - self.obsoletes = None - - self.license_files = _read_list_from_msg(msg, 'license-file') - - -def single_line(val): - """ - Quick and dirty validation for Summary pypa/setuptools#1390. - """ - if '\n' in val: - # TODO: Replace with `raise ValueError("newlines not allowed")` - # after reviewing #2893. 
- warnings.warn("newlines not allowed and will break in the future") - val = val.strip().split('\n')[0] - return val - - -# Based on Python 3.5 version -def write_pkg_file(self, file): # noqa: C901 # is too complex (14) # FIXME - """Write the PKG-INFO format data to a file object.""" - version = self.get_metadata_version() - - def write_field(key, value): - file.write("%s: %s\n" % (key, value)) - - write_field('Metadata-Version', str(version)) - write_field('Name', self.get_name()) - write_field('Version', self.get_version()) - - summary = self.get_description() - if summary: - write_field('Summary', single_line(summary)) - - optional_fields = ( - ('Home-page', 'url'), - ('Download-URL', 'download_url'), - ('Author', 'author'), - ('Author-email', 'author_email'), - ('Maintainer', 'maintainer'), - ('Maintainer-email', 'maintainer_email'), - ) - - for field, attr in optional_fields: - attr_val = getattr(self, attr, None) - if attr_val is not None: - write_field(field, attr_val) - - license = self.get_license() - if license: - write_field('License', rfc822_escape(license)) - - for project_url in self.project_urls.items(): - write_field('Project-URL', '%s, %s' % project_url) - - keywords = ','.join(self.get_keywords()) - if keywords: - write_field('Keywords', keywords) - - platforms = self.get_platforms() or [] - for platform in platforms: - write_field('Platform', platform) - - self._write_list(file, 'Classifier', self.get_classifiers()) - - # PEP 314 - self._write_list(file, 'Requires', self.get_requires()) - self._write_list(file, 'Provides', self.get_provides()) - self._write_list(file, 'Obsoletes', self.get_obsoletes()) - - # Setuptools specific for PEP 345 - if hasattr(self, 'python_requires'): - write_field('Requires-Python', self.python_requires) - - # PEP 566 - if self.long_description_content_type: - write_field('Description-Content-Type', self.long_description_content_type) - if self.provides_extras: - for extra in self.provides_extras: - 
write_field('Provides-Extra', extra) - - self._write_list(file, 'License-File', self.license_files or []) - - long_description = self.get_long_description() - if long_description: - file.write("\n%s" % long_description) - if not long_description.endswith("\n"): - file.write("\n") - - -sequence = tuple, list - - -def check_importable(dist, attr, value): - try: - ep = metadata.EntryPoint(value=value, name=None, group=None) - assert not ep.extras - except (TypeError, ValueError, AttributeError, AssertionError) as e: - raise DistutilsSetupError( - "%r must be importable 'module:attrs' string (got %r)" % (attr, value) - ) from e - - -def assert_string_list(dist, attr, value): - """Verify that value is a string list""" - try: - # verify that value is a list or tuple to exclude unordered - # or single-use iterables - assert isinstance(value, (list, tuple)) - # verify that elements of value are strings - assert ''.join(value) != value - except (TypeError, ValueError, AttributeError, AssertionError) as e: - raise DistutilsSetupError( - "%r must be a list of strings (got %r)" % (attr, value) - ) from e - - -def check_nsp(dist, attr, value): - """Verify that namespace packages are valid""" - ns_packages = value - assert_string_list(dist, attr, ns_packages) - for nsp in ns_packages: - if not dist.has_contents_for(nsp): - raise DistutilsSetupError( - "Distribution contains no modules or packages for " - + "namespace package %r" % nsp - ) - parent, sep, child = nsp.rpartition('.') - if parent and parent not in ns_packages: - distutils.log.warn( - "WARNING: %r is declared as a package namespace, but %r" - " is not: please correct this in setup.py", - nsp, - parent, - ) - msg = ( - "The namespace_packages parameter is deprecated, " - "consider using implicit namespaces instead (PEP 420)." 
- ) - warnings.warn(msg, SetuptoolsDeprecationWarning) - - -def check_extras(dist, attr, value): - """Verify that extras_require mapping is valid""" - try: - list(itertools.starmap(_check_extra, value.items())) - except (TypeError, ValueError, AttributeError) as e: - raise DistutilsSetupError( - "'extras_require' must be a dictionary whose values are " - "strings or lists of strings containing valid project/version " - "requirement specifiers." - ) from e - - -def _check_extra(extra, reqs): - name, sep, marker = extra.partition(':') - if marker and pkg_resources.invalid_marker(marker): - raise DistutilsSetupError("Invalid environment marker: " + marker) - list(_reqs.parse(reqs)) - - -def assert_bool(dist, attr, value): - """Verify that value is True, False, 0, or 1""" - if bool(value) != value: - tmpl = "{attr!r} must be a boolean value (got {value!r})" - raise DistutilsSetupError(tmpl.format(attr=attr, value=value)) - - -def invalid_unless_false(dist, attr, value): - if not value: - warnings.warn(f"{attr} is ignored.", DistDeprecationWarning) - return - raise DistutilsSetupError(f"{attr} is invalid.") - - -def check_requirements(dist, attr, value): - """Verify that install_requires is a valid requirements list""" - try: - list(_reqs.parse(value)) - if isinstance(value, (dict, set)): - raise TypeError("Unordered types are not allowed") - except (TypeError, ValueError) as error: - tmpl = ( - "{attr!r} must be a string or list of strings " - "containing valid project/version requirement specifiers; {error}" - ) - raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) from error - - -def check_specifier(dist, attr, value): - """Verify that value is a valid version specifier""" - try: - packaging.specifiers.SpecifierSet(value) - except (packaging.specifiers.InvalidSpecifier, AttributeError) as error: - tmpl = ( - "{attr!r} must be a string " "containing valid version specifiers; {error}" - ) - raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) from 
error - - -def check_entry_points(dist, attr, value): - """Verify that entry_points map is parseable""" - try: - _entry_points.load(value) - except Exception as e: - raise DistutilsSetupError(e) from e - - -def check_test_suite(dist, attr, value): - if not isinstance(value, str): - raise DistutilsSetupError("test_suite must be a string") - - -def check_package_data(dist, attr, value): - """Verify that value is a dictionary of package names to glob lists""" - if not isinstance(value, dict): - raise DistutilsSetupError( - "{!r} must be a dictionary mapping package names to lists of " - "string wildcard patterns".format(attr) - ) - for k, v in value.items(): - if not isinstance(k, str): - raise DistutilsSetupError( - "keys of {!r} dict must be strings (got {!r})".format(attr, k) - ) - assert_string_list(dist, 'values of {!r} dict'.format(attr), v) - - -def check_packages(dist, attr, value): - for pkgname in value: - if not re.match(r'\w+(\.\w+)*', pkgname): - distutils.log.warn( - "WARNING: %r not a valid package name; please use only " - ".-separated package names in setup.py", - pkgname, - ) - - -_Distribution = get_unpatched(distutils.core.Distribution) - - -class Distribution(_Distribution): - """Distribution with support for tests and package data - - This is an enhanced version of 'distutils.dist.Distribution' that - effectively adds the following new optional keyword arguments to 'setup()': - - 'install_requires' -- a string or sequence of strings specifying project - versions that the distribution requires when installed, in the format - used by 'pkg_resources.require()'. They will be installed - automatically when the package is installed. If you wish to use - packages that are not available in PyPI, or want to give your users an - alternate download location, you can add a 'find_links' option to the - '[easy_install]' section of your project's 'setup.cfg' file, and then - setuptools will scan the listed web pages for links that satisfy the - requirements. 
- - 'extras_require' -- a dictionary mapping names of optional "extras" to the - additional requirement(s) that using those extras incurs. For example, - this:: - - extras_require = dict(reST = ["docutils>=0.3", "reSTedit"]) - - indicates that the distribution can optionally provide an extra - capability called "reST", but it can only be used if docutils and - reSTedit are installed. If the user installs your package using - EasyInstall and requests one of your extras, the corresponding - additional requirements will be installed if needed. - - 'test_suite' -- the name of a test suite to run for the 'test' command. - If the user runs 'python setup.py test', the package will be installed, - and the named test suite will be run. The format is the same as - would be used on a 'unittest.py' command line. That is, it is the - dotted name of an object to import and call to generate a test suite. - - 'package_data' -- a dictionary mapping package names to lists of filenames - or globs to use to find data files contained in the named packages. - If the dictionary has filenames or globs listed under '""' (the empty - string), those names will be searched for in every package, in addition - to any names for the specific package. Data files found using these - names/globs will be installed along with the package, in the same - location as the package. Note that globs are allowed to reference - the contents of non-package subdirectories, as long as you use '/' as - a path separator. (Globs are automatically converted to - platform-specific paths at runtime.) - - In addition to these new keywords, this class also has several new methods - for manipulating the distribution's contents. For example, the 'include()' - and 'exclude()' methods can be thought of as in-place add and subtract - commands that add or remove packages, modules, extensions, and so on from - the distribution. 
- """ - - _DISTUTILS_UNSUPPORTED_METADATA = { - 'long_description_content_type': lambda: None, - 'project_urls': dict, - 'provides_extras': ordered_set.OrderedSet, - 'license_file': lambda: None, - 'license_files': lambda: None, - } - - _patched_dist = None - - def patch_missing_pkg_info(self, attrs): - # Fake up a replacement for the data that would normally come from - # PKG-INFO, but which might not yet be built if this is a fresh - # checkout. - # - if not attrs or 'name' not in attrs or 'version' not in attrs: - return - key = pkg_resources.safe_name(str(attrs['name'])).lower() - dist = pkg_resources.working_set.by_key.get(key) - if dist is not None and not dist.has_metadata('PKG-INFO'): - dist._version = pkg_resources.safe_version(str(attrs['version'])) - self._patched_dist = dist - - def __init__(self, attrs=None): - have_package_data = hasattr(self, "package_data") - if not have_package_data: - self.package_data = {} - attrs = attrs or {} - self.dist_files = [] - # Filter-out setuptools' specific options. 
- self.src_root = attrs.pop("src_root", None) - self.patch_missing_pkg_info(attrs) - self.dependency_links = attrs.pop('dependency_links', []) - self.setup_requires = attrs.pop('setup_requires', []) - for ep in metadata.entry_points(group='distutils.setup_keywords'): - vars(self).setdefault(ep.name, None) - _Distribution.__init__( - self, - { - k: v - for k, v in attrs.items() - if k not in self._DISTUTILS_UNSUPPORTED_METADATA - }, - ) - - # Save the original dependencies before they are processed into the egg format - self._orig_extras_require = {} - self._orig_install_requires = [] - self._tmp_extras_require = defaultdict(ordered_set.OrderedSet) - - self.set_defaults = ConfigDiscovery(self) - - self._set_metadata_defaults(attrs) - - self.metadata.version = self._normalize_version( - self._validate_version(self.metadata.version) - ) - self._finalize_requires() - - def _validate_metadata(self): - required = {"name"} - provided = { - key - for key in vars(self.metadata) - if getattr(self.metadata, key, None) is not None - } - missing = required - provided - - if missing: - msg = f"Required package metadata is missing: {missing}" - raise DistutilsSetupError(msg) - - def _set_metadata_defaults(self, attrs): - """ - Fill-in missing metadata fields not supported by distutils. - Some fields may have been set by other tools (e.g. pbr). - Those fields (vars(self.metadata)) take precedence to - supplied attrs. 
- """ - for option, default in self._DISTUTILS_UNSUPPORTED_METADATA.items(): - vars(self.metadata).setdefault(option, attrs.get(option, default())) - - @staticmethod - def _normalize_version(version): - if isinstance(version, setuptools.sic) or version is None: - return version - - normalized = str(packaging.version.Version(version)) - if version != normalized: - tmpl = "Normalizing '{version}' to '{normalized}'" - warnings.warn(tmpl.format(**locals())) - return normalized - return version - - @staticmethod - def _validate_version(version): - if isinstance(version, numbers.Number): - # Some people apparently take "version number" too literally :) - version = str(version) - - if version is not None: - try: - packaging.version.Version(version) - except (packaging.version.InvalidVersion, TypeError): - warnings.warn( - "The version specified (%r) is an invalid version, this " - "may not work as expected with newer versions of " - "setuptools, pip, and PyPI. Please see PEP 440 for more " - "details." % version - ) - return setuptools.sic(version) - return version - - def _finalize_requires(self): - """ - Set `metadata.python_requires` and fix environment markers - in `install_requires` and `extras_require`. - """ - if getattr(self, 'python_requires', None): - self.metadata.python_requires = self.python_requires - - if getattr(self, 'extras_require', None): - # Save original before it is messed by _convert_extras_requirements - self._orig_extras_require = self._orig_extras_require or self.extras_require - for extra in self.extras_require.keys(): - # Since this gets called multiple times at points where the - # keys have become 'converted' extras, ensure that we are only - # truly adding extras we haven't seen before here. 
- extra = extra.split(':')[0] - if extra: - self.metadata.provides_extras.add(extra) - - if getattr(self, 'install_requires', None) and not self._orig_install_requires: - # Save original before it is messed by _move_install_requirements_markers - self._orig_install_requires = self.install_requires - - self._convert_extras_requirements() - self._move_install_requirements_markers() - - def _convert_extras_requirements(self): - """ - Convert requirements in `extras_require` of the form - `"extra": ["barbazquux; {marker}"]` to - `"extra:{marker}": ["barbazquux"]`. - """ - spec_ext_reqs = getattr(self, 'extras_require', None) or {} - tmp = defaultdict(ordered_set.OrderedSet) - self._tmp_extras_require = getattr(self, '_tmp_extras_require', tmp) - for section, v in spec_ext_reqs.items(): - # Do not strip empty sections. - self._tmp_extras_require[section] - for r in _reqs.parse(v): - suffix = self._suffix_for(r) - self._tmp_extras_require[section + suffix].append(r) - - @staticmethod - def _suffix_for(req): - """ - For a requirement, return the 'extras_require' suffix for - that requirement. - """ - return ':' + str(req.marker) if req.marker else '' - - def _move_install_requirements_markers(self): - """ - Move requirements in `install_requires` that are using environment - markers to `extras_require`. - """ - - # divide the install_requires into two sets, simple ones still - # handled by install_requires and more complex ones handled - # by extras_require. 
- - def is_simple_req(req): - return not req.marker - - spec_inst_reqs = getattr(self, 'install_requires', None) or () - inst_reqs = list(_reqs.parse(spec_inst_reqs)) - simple_reqs = filter(is_simple_req, inst_reqs) - complex_reqs = itertools.filterfalse(is_simple_req, inst_reqs) - self.install_requires = list(map(str, simple_reqs)) - - for r in complex_reqs: - self._tmp_extras_require[':' + str(r.marker)].append(r) - self.extras_require = dict( - # list(dict.fromkeys(...)) ensures a list of unique strings - (k, list(dict.fromkeys(str(r) for r in map(self._clean_req, v)))) - for k, v in self._tmp_extras_require.items() - ) - - def _clean_req(self, req): - """ - Given a Requirement, remove environment markers and return it. - """ - req.marker = None - return req - - def _finalize_license_files(self): - """Compute names of all license files which should be included.""" - license_files: Optional[List[str]] = self.metadata.license_files - patterns: List[str] = license_files if license_files else [] - - license_file: Optional[str] = self.metadata.license_file - if license_file and license_file not in patterns: - patterns.append(license_file) - - if license_files is None and license_file is None: - # Default patterns match the ones wheel uses - # See https://wheel.readthedocs.io/en/stable/user_guide.html - # -> 'Including license files in the generated wheel file' - patterns = ('LICEN[CS]E*', 'COPYING*', 'NOTICE*', 'AUTHORS*') - - self.metadata.license_files = list( - unique_everseen(self._expand_patterns(patterns)) - ) - - @staticmethod - def _expand_patterns(patterns): - """ - >>> list(Distribution._expand_patterns(['LICENSE'])) - ['LICENSE'] - >>> list(Distribution._expand_patterns(['setup.cfg', 'LIC*'])) - ['setup.cfg', 'LICENSE'] - """ - return ( - path - for pattern in patterns - for path in sorted(iglob(pattern)) - if not path.endswith('~') and os.path.isfile(path) - ) - - # FIXME: 'Distribution._parse_config_files' is too complex (14) - def 
_parse_config_files(self, filenames=None): # noqa: C901 - """ - Adapted from distutils.dist.Distribution.parse_config_files, - this method provides the same functionality in subtly-improved - ways. - """ - from configparser import ConfigParser - - # Ignore install directory options if we have a venv - ignore_options = ( - [] - if sys.prefix == sys.base_prefix - else [ - 'install-base', - 'install-platbase', - 'install-lib', - 'install-platlib', - 'install-purelib', - 'install-headers', - 'install-scripts', - 'install-data', - 'prefix', - 'exec-prefix', - 'home', - 'user', - 'root', - ] - ) - - ignore_options = frozenset(ignore_options) - - if filenames is None: - filenames = self.find_config_files() - - if DEBUG: - self.announce("Distribution.parse_config_files():") - - parser = ConfigParser() - parser.optionxform = str - for filename in filenames: - with io.open(filename, encoding='utf-8') as reader: - if DEBUG: - self.announce("  reading {filename}".format(**locals())) - parser.read_file(reader) - for section in parser.sections(): - options = parser.options(section) - opt_dict = self.get_option_dict(section) - - for opt in options: - if opt == '__name__' or opt in ignore_options: - continue - - val = parser.get(section, opt) - opt = self.warn_dash_deprecation(opt, section) - opt = self.make_option_lowercase(opt, section) - opt_dict[opt] = (filename, val) - - # Make the ConfigParser forget everything (so we retain - # the original filenames that options come from) - parser.__init__() - - if 'global' not in self.command_options: - return - - # If there was a "global" section in the config file, use it - # to set Distribution options. - - for (opt, (src, val)) in self.command_options['global'].items(): - alias = self.negative_opt.get(opt) - if alias: - val = not strtobool(val) - elif opt in ('verbose', 'dry_run'): # ugh! 
- val = strtobool(val) - - try: - setattr(self, alias or opt, val) - except ValueError as e: - raise DistutilsOptionError(e) from e - - def warn_dash_deprecation(self, opt, section): - if section in ( - 'options.extras_require', - 'options.data_files', - ): - return opt - - underscore_opt = opt.replace('-', '_') - commands = list(itertools.chain( - distutils.command.__all__, - self._setuptools_commands(), - )) - if ( - not section.startswith('options') - and section != 'metadata' - and section not in commands - ): - return underscore_opt - - if '-' in opt: - warnings.warn( - "Usage of dash-separated '%s' will not be supported in future " - "versions. Please use the underscore name '%s' instead" - % (opt, underscore_opt) - ) - return underscore_opt - - def _setuptools_commands(self): - try: - return metadata.distribution('setuptools').entry_points.names - except metadata.PackageNotFoundError: - # during bootstrapping, distribution doesn't exist - return [] - - def make_option_lowercase(self, opt, section): - if section != 'metadata' or opt.islower(): - return opt - - lowercase_opt = opt.lower() - warnings.warn( - "Usage of uppercase key '%s' in '%s' will be deprecated in future " - "versions. Please use lowercase '%s' instead" - % (opt, section, lowercase_opt) - ) - return lowercase_opt - - # FIXME: 'Distribution._set_command_options' is too complex (14) - def _set_command_options(self, command_obj, option_dict=None): # noqa: C901 - """ - Set the options for 'command_obj' from 'option_dict'. Basically - this means copying elements of a dictionary ('option_dict') to - attributes of an instance ('command'). - - 'command_obj' must be a Command instance. If 'option_dict' is not - supplied, uses the standard option dictionary for this command - (from 'self.command_options'). 
- - (Adopted from distutils.dist.Distribution._set_command_options) - """ - command_name = command_obj.get_command_name() - if option_dict is None: - option_dict = self.get_option_dict(command_name) - - if DEBUG: - self.announce(" setting options for '%s' command:" % command_name) - for (option, (source, value)) in option_dict.items(): - if DEBUG: - self.announce(" %s = %s (from %s)" % (option, value, source)) - try: - bool_opts = [translate_longopt(o) for o in command_obj.boolean_options] - except AttributeError: - bool_opts = [] - try: - neg_opt = command_obj.negative_opt - except AttributeError: - neg_opt = {} - - try: - is_string = isinstance(value, str) - if option in neg_opt and is_string: - setattr(command_obj, neg_opt[option], not strtobool(value)) - elif option in bool_opts and is_string: - setattr(command_obj, option, strtobool(value)) - elif hasattr(command_obj, option): - setattr(command_obj, option, value) - else: - raise DistutilsOptionError( - "error in %s: command '%s' has no such option '%s'" - % (source, command_name, option) - ) - except ValueError as e: - raise DistutilsOptionError(e) from e - - def _get_project_config_files(self, filenames): - """Add default file and split between INI and TOML""" - tomlfiles = [] - standard_project_metadata = Path(self.src_root or os.curdir, "pyproject.toml") - if filenames is not None: - parts = partition(lambda f: Path(f).suffix == ".toml", filenames) - filenames = list(parts[0]) # 1st element => predicate is False - tomlfiles = list(parts[1]) # 2nd element => predicate is True - elif standard_project_metadata.exists(): - tomlfiles = [standard_project_metadata] - return filenames, tomlfiles - - def parse_config_files(self, filenames=None, ignore_option_errors=False): - """Parses configuration files from various levels - and loads configuration. 
- """ - inifiles, tomlfiles = self._get_project_config_files(filenames) - - self._parse_config_files(filenames=inifiles) - - setupcfg.parse_configuration( - self, self.command_options, ignore_option_errors=ignore_option_errors - ) - for filename in tomlfiles: - pyprojecttoml.apply_configuration(self, filename, ignore_option_errors) - - self._finalize_requires() - self._finalize_license_files() - - def fetch_build_eggs(self, requires): - """Resolve pre-setup requirements""" - resolved_dists = pkg_resources.working_set.resolve( - _reqs.parse(requires), - installer=self.fetch_build_egg, - replace_conflicting=True, - ) - for dist in resolved_dists: - pkg_resources.working_set.add(dist, replace=True) - return resolved_dists - - def finalize_options(self): - """ - Allow plugins to apply arbitrary operations to the - distribution. Each hook may optionally define a 'order' - to influence the order of execution. Smaller numbers - go first and the default is 0. - """ - group = 'setuptools.finalize_distribution_options' - - def by_order(hook): - return getattr(hook, 'order', 0) - - defined = metadata.entry_points(group=group) - filtered = itertools.filterfalse(self._removed, defined) - loaded = map(lambda e: e.load(), filtered) - for ep in sorted(loaded, key=by_order): - ep(self) - - @staticmethod - def _removed(ep): - """ - When removing an entry point, if metadata is loaded - from an older version of Setuptools, that removed - entry point will attempt to be loaded and will fail. - See #2765 for more details. 
- """ - removed = { - # removed 2021-09-05 - '2to3_doctests', - } - return ep.name in removed - - def _finalize_setup_keywords(self): - for ep in metadata.entry_points(group='distutils.setup_keywords'): - value = getattr(self, ep.name, None) - if value is not None: - ep.load()(self, ep.name, value) - - def get_egg_cache_dir(self): - egg_cache_dir = os.path.join(os.curdir, '.eggs') - if not os.path.exists(egg_cache_dir): - os.mkdir(egg_cache_dir) - windows_support.hide_file(egg_cache_dir) - readme_txt_filename = os.path.join(egg_cache_dir, 'README.txt') - with open(readme_txt_filename, 'w') as f: - f.write( - 'This directory contains eggs that were downloaded ' - 'by setuptools to build, test, and run plug-ins.\n\n' - ) - f.write( - 'This directory caches those eggs to prevent ' - 'repeated downloads.\n\n' - ) - f.write('However, it is safe to delete this directory.\n\n') - - return egg_cache_dir - - def fetch_build_egg(self, req): - """Fetch an egg needed for building""" - from setuptools.installer import fetch_build_egg - - return fetch_build_egg(self, req) - - def get_command_class(self, command): - """Pluggable version of get_command_class()""" - if command in self.cmdclass: - return self.cmdclass[command] - - eps = metadata.entry_points(group='distutils.commands', name=command) - for ep in eps: - self.cmdclass[command] = cmdclass = ep.load() - return cmdclass - else: - return _Distribution.get_command_class(self, command) - - def print_commands(self): - for ep in metadata.entry_points(group='distutils.commands'): - if ep.name not in self.cmdclass: - cmdclass = ep.load() - self.cmdclass[ep.name] = cmdclass - return _Distribution.print_commands(self) - - def get_command_list(self): - for ep in metadata.entry_points(group='distutils.commands'): - if ep.name not in self.cmdclass: - cmdclass = ep.load() - self.cmdclass[ep.name] = cmdclass - return _Distribution.get_command_list(self) - - def include(self, **attrs): - """Add items to distribution that are named in 
keyword arguments - - For example, 'dist.include(py_modules=["x"])' would add 'x' to - the distribution's 'py_modules' attribute, if it was not already - there. - - Currently, this method only supports inclusion for attributes that are - lists or tuples. If you need to add support for adding to other - attributes in this or a subclass, you can add an '_include_X' method, - where 'X' is the name of the attribute. The method will be called with - the value passed to 'include()'. So, 'dist.include(foo={"bar":"baz"})' - will try to call 'dist._include_foo({"bar":"baz"})', which can then - handle whatever special inclusion logic is needed. - """ - for k, v in attrs.items(): - include = getattr(self, '_include_' + k, None) - if include: - include(v) - else: - self._include_misc(k, v) - - def exclude_package(self, package): - """Remove packages, modules, and extensions in named package""" - - pfx = package + '.' - if self.packages: - self.packages = [ - p for p in self.packages if p != package and not p.startswith(pfx) - ] - - if self.py_modules: - self.py_modules = [ - p for p in self.py_modules if p != package and not p.startswith(pfx) - ] - - if self.ext_modules: - self.ext_modules = [ - p - for p in self.ext_modules - if p.name != package and not p.name.startswith(pfx) - ] - - def has_contents_for(self, package): - """Return true if 'exclude_package(package)' would do something""" - - pfx = package + '.' 
- - for p in self.iter_distribution_names(): - if p == package or p.startswith(pfx): - return True - - def _exclude_misc(self, name, value): - """Handle 'exclude()' for list/tuple attrs without a special handler""" - if not isinstance(value, sequence): - raise DistutilsSetupError( - "%s: setting must be a list or tuple (%r)" % (name, value) - ) - try: - old = getattr(self, name) - except AttributeError as e: - raise DistutilsSetupError("%s: No such distribution setting" % name) from e - if old is not None and not isinstance(old, sequence): - raise DistutilsSetupError( - name + ": this setting cannot be changed via include/exclude" - ) - elif old: - setattr(self, name, [item for item in old if item not in value]) - - def _include_misc(self, name, value): - """Handle 'include()' for list/tuple attrs without a special handler""" - - if not isinstance(value, sequence): - raise DistutilsSetupError("%s: setting must be a list (%r)" % (name, value)) - try: - old = getattr(self, name) - except AttributeError as e: - raise DistutilsSetupError("%s: No such distribution setting" % name) from e - if old is None: - setattr(self, name, value) - elif not isinstance(old, sequence): - raise DistutilsSetupError( - name + ": this setting cannot be changed via include/exclude" - ) - else: - new = [item for item in value if item not in old] - setattr(self, name, old + new) - - def exclude(self, **attrs): - """Remove items from distribution that are named in keyword arguments - - For example, 'dist.exclude(py_modules=["x"])' would remove 'x' from - the distribution's 'py_modules' attribute. Excluding packages uses - the 'exclude_package()' method, so all of the package's contained - packages, modules, and extensions are also excluded. - - Currently, this method only supports exclusion from attributes that are - lists or tuples. 
If you need to add support for excluding from other - attributes in this or a subclass, you can add an '_exclude_X' method, - where 'X' is the name of the attribute. The method will be called with - the value passed to 'exclude()'. So, 'dist.exclude(foo={"bar":"baz"})' - will try to call 'dist._exclude_foo({"bar":"baz"})', which can then - handle whatever special exclusion logic is needed. - """ - for k, v in attrs.items(): - exclude = getattr(self, '_exclude_' + k, None) - if exclude: - exclude(v) - else: - self._exclude_misc(k, v) - - def _exclude_packages(self, packages): - if not isinstance(packages, sequence): - raise DistutilsSetupError( - "packages: setting must be a list or tuple (%r)" % (packages,) - ) - list(map(self.exclude_package, packages)) - - def _parse_command_opts(self, parser, args): - # Remove --with-X/--without-X options when processing command args - self.global_options = self.__class__.global_options - self.negative_opt = self.__class__.negative_opt - - # First, expand any aliases - command = args[0] - aliases = self.get_option_dict('aliases') - while command in aliases: - src, alias = aliases[command] - del aliases[command] # ensure each alias can expand only once! - import shlex - - args[:1] = shlex.split(alias, True) - command = args[0] - - nargs = _Distribution._parse_command_opts(self, parser, args) - - # Handle commands that want to consume all remaining arguments - cmd_class = self.get_command_class(command) - if getattr(cmd_class, 'command_consumes_arguments', None): - self.get_option_dict(command)['args'] = ("command line", nargs) - if nargs is not None: - return [] - - return nargs - - def get_cmdline_options(self): - """Return a '{cmd: {opt:val}}' map of all command-line options - - Option names are all long, but do not include the leading '--', and - contain dashes rather than underscores. If the option doesn't take - an argument (e.g. '--quiet'), the 'val' is 'None'. 
- - Note that options provided by config files are intentionally excluded. - """ - - d = {} - - for cmd, opts in self.command_options.items(): - - for opt, (src, val) in opts.items(): - - if src != "command line": - continue - - opt = opt.replace('_', '-') - - if val == 0: - cmdobj = self.get_command_obj(cmd) - neg_opt = self.negative_opt.copy() - neg_opt.update(getattr(cmdobj, 'negative_opt', {})) - for neg, pos in neg_opt.items(): - if pos == opt: - opt = neg - val = None - break - else: - raise AssertionError("Shouldn't be able to get here") - - elif val == 1: - val = None - - d.setdefault(cmd, {})[opt] = val - - return d - - def iter_distribution_names(self): - """Yield all packages, modules, and extension names in distribution""" - - for pkg in self.packages or (): - yield pkg - - for module in self.py_modules or (): - yield module - - for ext in self.ext_modules or (): - if isinstance(ext, tuple): - name, buildinfo = ext - else: - name = ext.name - if name.endswith('module'): - name = name[:-6] - yield name - - def handle_display_options(self, option_order): - """If there were any non-global "display-only" options - (--help-commands or the metadata display options) on the command - line, display the requested info and return true; else return - false. - """ - import sys - - if self.help_commands: - return _Distribution.handle_display_options(self, option_order) - - # Stdout may be StringIO (e.g. in tests) - if not isinstance(sys.stdout, io.TextIOWrapper): - return _Distribution.handle_display_options(self, option_order) - - # Don't wrap stdout if utf-8 is already the encoding. Provides - # workaround for #334. 
- if sys.stdout.encoding.lower() in ('utf-8', 'utf8'): - return _Distribution.handle_display_options(self, option_order) - - # Print metadata in UTF-8 no matter the platform - encoding = sys.stdout.encoding - errors = sys.stdout.errors - newline = sys.platform != 'win32' and '\n' or None - line_buffering = sys.stdout.line_buffering - - sys.stdout = io.TextIOWrapper( - sys.stdout.detach(), 'utf-8', errors, newline, line_buffering - ) - try: - return _Distribution.handle_display_options(self, option_order) - finally: - sys.stdout = io.TextIOWrapper( - sys.stdout.detach(), encoding, errors, newline, line_buffering - ) - - def run_command(self, command): - self.set_defaults() - # Postpone defaults until all explicit configuration is considered - # (setup() args, config files, command line and plugins) - - super().run_command(command) - - -class DistDeprecationWarning(SetuptoolsDeprecationWarning): - """Class for warning about deprecations in dist in - setuptools. Not ignored by default, unlike DeprecationWarning.""" diff --git a/spaces/tomandandy/MusicGen3/tests/modules/test_transformer.py b/spaces/tomandandy/MusicGen3/tests/modules/test_transformer.py deleted file mode 100644 index ff7dfe4c2de05112aec55ddea9c8fd978668f80b..0000000000000000000000000000000000000000 --- a/spaces/tomandandy/MusicGen3/tests/modules/test_transformer.py +++ /dev/null @@ -1,253 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product - -import pytest -import torch - -from audiocraft.modules.transformer import ( - StreamingMultiheadAttention, StreamingTransformer, set_efficient_attention_backend) - - -def test_transformer_causal_streaming(): - torch.manual_seed(1234) - - for context, custom in product([None, 10], [False, True]): - # Test that causality and receptive fields are properly handled. 
- # looking at the gradients - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=custom, - dropout=0.) - steps = 20 - for k in [0, 10, 15, 19]: - x = torch.randn(4, steps, 16, requires_grad=True) - y = tr(x) - y[:, k].abs().sum().backward() - if k + 1 < steps: - assert torch.allclose(x.grad[:, k + 1:], torch.tensor(0.)), x.grad[:, k + 1:].norm() - assert not torch.allclose(x.grad[:, :k + 1], torch.tensor(0.)), x.grad[:, :k + 1].norm() - if context is not None and k > context: - limit = k - context - 1 - assert torch.allclose(x.grad[:, :limit], - torch.tensor(0.)), x.grad[:, :limit].norm() - - # Now check that streaming gives the same result at batch eval. - x = torch.randn(4, steps, 16) - y = tr(x) - ys = [] - with tr.streaming(): - for k in range(steps): - chunk = x[:, k:k + 1, :] - ys.append(tr(chunk)) - y_stream = torch.cat(ys, dim=1) - delta = torch.norm(y_stream - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_transformer_vs_pytorch(): - torch.manual_seed(1234) - # Check that in the non causal setting, we get the same result as - # PyTorch Transformer encoder. - for custom in [False, True]: - tr = StreamingTransformer( - 16, 4, 2, - causal=False, custom=custom, dropout=0., positional_scale=0.) - layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True) - tr_ref = torch.nn.TransformerEncoder(layer, 2) - tr.load_state_dict(tr_ref.state_dict()) - - x = torch.randn(4, 20, 16) - y = tr(x) - y2 = tr_ref(x) - delta = torch.norm(y2 - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_streaming_api(): - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.) 
- tr.eval() - steps = 12 - x = torch.randn(1, steps, 16) - - with torch.no_grad(): - with tr.streaming(): - _ = tr(x[:, :1]) - state = {k: v.clone() for k, v in tr.get_streaming_state().items()} - y = tr(x[:, 1:2]) - tr.set_streaming_state(state) - y2 = tr(x[:, 1:2]) - assert torch.allclose(y, y2), (y - y2).norm() - assert tr.flush() is None - - -def test_memory_efficient(): - for backend in ['torch', 'xformers']: - torch.manual_seed(1234) - set_efficient_attention_backend(backend) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1) - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - assert torch.allclose(y, y2), ((y - y2).norm(), backend) - - -def test_attention_as_float32(): - torch.manual_seed(1234) - cases = [ - {'custom': True}, - {'custom': False}, - ] - for case in cases: - tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case) - tr_float32 = StreamingTransformer( - 16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case) - if not case['custom']: - # we are not using autocast here because it doesn't really - # work as expected on CPU, so we have to manually cast the weights of the MHA. 
- for layer in tr_float32.layers: - layer.self_attn.mha.to(torch.float32) - tr_float32.load_state_dict(tr.state_dict()) - steps = 12 - x = torch.randn(3, steps, 16, dtype=torch.bfloat16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_float32(x) - assert not torch.allclose(y, y2), (y - y2).norm() - - -@torch.no_grad() -def test_streaming_memory_efficient(): - for backend in ['torch', 'xformers']: - torch.manual_seed(1234) - set_efficient_attention_backend(backend) - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, causal=True) - tr.load_state_dict(tr_mem_efficient.state_dict()) - tr.eval() - tr_mem_efficient.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr_mem_efficient.streaming(): - outs = [] - # frame_sizes = [2] + [1] * (steps - 2) - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr_mem_efficient(frame)) - - out = torch.cat(outs, dim=1) - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_cross_attention(): - torch.manual_seed(1234) - for norm_first in [True, False]: - m = StreamingTransformer( - 16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True) - m_cross = StreamingTransformer( - 16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True) - m_cross.load_state_dict(m.state_dict(), strict=False) - x = torch.randn(2, 5, 16) - cross_x = torch.randn(2, 3, 16) - y_ref = m(x) - y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x) - # With norm_first, the two should be exactly yhe same, - # but with norm_first=False, we get 2 normalization in a row - # and the epsilon value leads to a tiny change. - atol = 0. 
if norm_first else 1e-6 - print((y_ref - y_cross_zero).norm() / y_ref.norm()) - assert torch.allclose(y_ref, y_cross_zero, atol=atol) - - # We now expect a difference even with a generous atol of 1e-2. - y_cross = m_cross(x, cross_attention_src=cross_x) - assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2) - - with pytest.raises(AssertionError): - _ = m_cross(x) - _ = m(x, cross_attention_src=cross_x) - - -def test_cross_attention_compat(): - torch.manual_seed(1234) - num_heads = 2 - dim = num_heads * 64 - with pytest.raises(AssertionError): - StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True) - - cross_attn = StreamingMultiheadAttention( - dim, num_heads, dropout=0, cross_attention=True, custom=True) - ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True) - - # We can load the regular attention state dict - # so we have compat when loading old checkpoints. - cross_attn.load_state_dict(ref_attn.state_dict()) - - queries = torch.randn(3, 7, dim) - keys = torch.randn(3, 9, dim) - values = torch.randn(3, 9, dim) - - y = cross_attn(queries, keys, values)[0] - y_ref = ref_attn(queries, keys, values)[0] - assert torch.allclose(y, y_ref, atol=1e-7), (y - y_ref).norm() / y_ref.norm() - - # Now let's check that streaming is working properly. 
- with cross_attn.streaming(): - ys = [] - for step in range(queries.shape[1]): - ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0]) - y_streaming = torch.cat(ys, dim=1) - assert torch.allclose(y_streaming, y, atol=1e-7) - - -def test_repeat_kv(): - torch.manual_seed(1234) - num_heads = 8 - kv_repeat = 4 - dim = num_heads * 64 - with pytest.raises(AssertionError): - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True) - x = torch.randn(4, 18, dim) - y = mha(x, x, x)[0] - assert x.shape == y.shape - - -def test_qk_layer_norm(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False) - steps = 12 - x = torch.randn(3, steps, 16) - y = tr(x) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True) - z = torch.randn(3, 21, 16) - y = tr(x, cross_attention_src=z) - assert y.shape == x.shape diff --git a/spaces/tomofi/MMOCR/configs/_base_/det_models/dbnet_r18_fpnc.py b/spaces/tomofi/MMOCR/configs/_base_/det_models/dbnet_r18_fpnc.py deleted file mode 100644 index 7507605d84f602dbfc0ce3b6b0519add917afe5f..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/configs/_base_/det_models/dbnet_r18_fpnc.py +++ /dev/null @@ -1,21 +0,0 @@ -model = dict( - type='DBNet', - backbone=dict( - type='mmdet.ResNet', - depth=18, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='BN', requires_grad=True), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18'), - norm_eval=False, - style='caffe'), - neck=dict( - type='FPNC', in_channels=[64, 128, 256, 512], lateral_channels=256), - bbox_head=dict( - type='DBHead', - in_channels=256, - 
loss=dict(type='DBLoss', alpha=5.0, beta=10.0, bbce_loss=True), - postprocessor=dict(type='DBPostprocessor', text_repr_type='quad')), - train_cfg=None, - test_cfg=None) diff --git a/spaces/tomofi/MMOCR/tests/test_apis/test_model_inference.py b/spaces/tomofi/MMOCR/tests/test_apis/test_model_inference.py deleted file mode 100644 index 9c09fa80b84b258e40e678bc19cffdc8d86ab0ff..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/tests/test_apis/test_model_inference.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import platform - -import pytest -from mmcv.image import imread - -from mmocr.apis.inference import init_detector, model_inference -from mmocr.datasets import build_dataset # noqa: F401 -from mmocr.models import build_detector # noqa: F401 -from mmocr.utils import revert_sync_batchnorm - - -def build_model(config_file): - device = 'cpu' - model = init_detector(config_file, checkpoint=None, device=device) - model = revert_sync_batchnorm(model) - - return model - - -@pytest.mark.skipif( - platform.system() == 'Windows', - reason='Win container on Github Action does not have enough RAM to run') -@pytest.mark.parametrize('cfg_file', [ - '../configs/textrecog/sar/sar_r31_parallel_decoder_academic.py', - '../configs/textrecog/abinet/abinet_academic.py', - '../configs/textrecog/crnn/crnn_academic_dataset.py', - '../configs/textrecog/seg/seg_r31_1by16_fpnocr_academic.py', - '../configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2017.py' -]) -def test_model_inference(cfg_file): - tmp_dir = os.path.abspath(os.path.dirname(os.path.dirname(__file__))) - config_file = os.path.join(tmp_dir, cfg_file) - model = build_model(config_file) - with pytest.raises(AssertionError): - model_inference(model, 1) - - sample_img_path = os.path.join(tmp_dir, '../demo/demo_text_det.jpg') - model_inference(model, sample_img_path) - - # numpy inference - img = imread(sample_img_path) - - model_inference(model, img) - - 
-@pytest.mark.skipif( - platform.system() == 'Windows', - reason='Win container on Github Action does not have enough RAM to run') -@pytest.mark.parametrize( - 'cfg_file', - ['../configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2017.py']) -def test_model_batch_inference_det(cfg_file): - tmp_dir = os.path.abspath(os.path.dirname(os.path.dirname(__file__))) - config_file = os.path.join(tmp_dir, cfg_file) - model = build_model(config_file) - - sample_img_path = os.path.join(tmp_dir, '../demo/demo_text_det.jpg') - results = model_inference(model, [sample_img_path], batch_mode=True) - - assert len(results) == 1 - - # numpy inference - img = imread(sample_img_path) - results = model_inference(model, [img], batch_mode=True) - - assert len(results) == 1 - - -@pytest.mark.parametrize('cfg_file', [ - '../configs/textrecog/sar/sar_r31_parallel_decoder_academic.py', -]) -def test_model_batch_inference_raises_exception_error_aug_test_recog(cfg_file): - tmp_dir = os.path.abspath(os.path.dirname(os.path.dirname(__file__))) - config_file = os.path.join(tmp_dir, cfg_file) - model = build_model(config_file) - - with pytest.raises( - Exception, - match='aug test does not support inference with batch size'): - sample_img_path = os.path.join(tmp_dir, '../demo/demo_text_det.jpg') - model_inference(model, [sample_img_path, sample_img_path]) - - with pytest.raises( - Exception, - match='aug test does not support inference with batch size'): - img = imread(sample_img_path) - model_inference(model, [img, img]) - - -@pytest.mark.parametrize('cfg_file', [ - '../configs/textrecog/sar/sar_r31_parallel_decoder_academic.py', -]) -def test_model_batch_inference_recog(cfg_file): - tmp_dir = os.path.abspath(os.path.dirname(os.path.dirname(__file__))) - config_file = os.path.join(tmp_dir, cfg_file) - model = build_model(config_file) - - sample_img_path = os.path.join(tmp_dir, '../demo/demo_text_recog.jpg') - results = model_inference( - model, [sample_img_path, sample_img_path], batch_mode=True) - - 
assert len(results) == 2 - - # numpy inference - img = imread(sample_img_path) - results = model_inference(model, [img, img], batch_mode=True) - - assert len(results) == 2 - - -@pytest.mark.parametrize( - 'cfg_file', - ['../configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2017.py']) -def test_model_batch_inference_empty_detection(cfg_file): - tmp_dir = os.path.abspath(os.path.dirname(os.path.dirname(__file__))) - config_file = os.path.join(tmp_dir, cfg_file) - model = build_model(config_file) - - empty_detection = [] - - with pytest.raises( - Exception, - match='empty imgs provided, please check and try again'): - - model_inference(model, empty_detection, batch_mode=True) diff --git a/spaces/tomofi/NDLOCR/src/text_recognition/deep-text-recognition-benchmark/utils.py b/spaces/tomofi/NDLOCR/src/text_recognition/deep-text-recognition-benchmark/utils.py deleted file mode 100644 index 6fb98e78b5962cdcbbef14494002709166ef1efe..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/text_recognition/deep-text-recognition-benchmark/utils.py +++ /dev/null @@ -1,173 +0,0 @@ -import torch -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - - -class CTCLabelConverter(object): - """ Convert between text-label and text-index """ - - def __init__(self, character): - # character (str): set of the possible characters. - dict_character = list(character) - - self.dict = {} - for i, char in enumerate(dict_character): - # NOTE: 0 is reserved for 'CTCblank' token required by CTCLoss - self.dict[char] = i + 1 - - self.character = ['[CTCblank]'] + dict_character # dummy '[CTCblank]' token for CTCLoss (index 0) - - def encode(self, text, batch_max_length=25): - """convert text-label into text-index. - input: - text: text labels of each image. [batch_size] - batch_max_length: max length of text label in the batch. 25 by default - - output: - text: text index for CTCLoss. [batch_size, batch_max_length] - length: length of each text. 
[batch_size] - """ - length = [len(s) for s in text] - - # The index used for padding (=0) would not affect the CTC loss calculation. - batch_text = torch.LongTensor(len(text), batch_max_length).fill_(0) - for i, t in enumerate(text): - text = list(t) - try: - text = [self.dict[char] for char in text] - except Exception as e: - print(text) - raise e - batch_text[i][:len(text)] = torch.LongTensor(text) - return (batch_text.to(device), torch.IntTensor(length).to(device)) - - def decode(self, text_index, length): - """ convert text-index into text-label. """ - texts = [] - for index, l in enumerate(length): - t = text_index[index, :] - - char_list = [] - for i in range(l): - if t[i] != 0 and (not (i > 0 and t[i - 1] == t[i])): # removing repeated characters and blank. - char_list.append(self.character[t[i]]) - text = ''.join(char_list) - - texts.append(text) - return texts - - -class CTCLabelConverterForBaiduWarpctc(object): - """ Convert between text-label and text-index for baidu warpctc """ - - def __init__(self, character): - # character (str): set of the possible characters. - dict_character = list(character) - - self.dict = {} - for i, char in enumerate(dict_character): - # NOTE: 0 is reserved for 'CTCblank' token required by CTCLoss - self.dict[char] = i + 1 - - self.character = ['[CTCblank]'] + dict_character # dummy '[CTCblank]' token for CTCLoss (index 0) - - def encode(self, text, batch_max_length=25): - """convert text-label into text-index. - input: - text: text labels of each image. [batch_size] - output: - text: concatenated text index for CTCLoss. - [sum(text_lengths)] = [text_index_0 + text_index_1 + ... + text_index_(n - 1)] - length: length of each text. [batch_size] - """ - length = [len(s) for s in text] - text = ''.join(text) - text = [self.dict[char] for char in text] - - return (torch.IntTensor(text), torch.IntTensor(length)) - - def decode(self, text_index, length): - """ convert text-index into text-label. 
""" - texts = [] - index = 0 - for l in length: - t = text_index[index:index + l] - - char_list = [] - for i in range(l): - if t[i] != 0 and (not (i > 0 and t[i - 1] == t[i])): # removing repeated characters and blank. - char_list.append(self.character[t[i]]) - text = ''.join(char_list) - - texts.append(text) - index += l - return texts - - -class AttnLabelConverter(object): - """ Convert between text-label and text-index """ - - def __init__(self, character): - # character (str): set of the possible characters. - # [GO] for the start token of the attention decoder. [s] for end-of-sentence token. - list_token = ['[GO]', '[s]'] # ['[s]','[UNK]','[PAD]','[GO]'] - list_character = list(character) - self.character = list_token + list_character - - self.dict = {} - for i, char in enumerate(self.character): - # print(i, char) - self.dict[char] = i - - def encode(self, text, batch_max_length=25): - """ convert text-label into text-index. - input: - text: text labels of each image. [batch_size] - batch_max_length: max length of text label in the batch. 25 by default - - output: - text : the input of attention decoder. [batch_size x (max_length+2)] +1 for [GO] token and +1 for [s] token. - text[:, 0] is [GO] token and text is padded with [GO] token after [s] token. - length : the length of output of attention decoder, which count [s] token also. [3, 7, ....] [batch_size] - """ - length = [len(s) + 1 for s in text] # +1 for [s] at end of sentence. - # batch_max_length = max(length) # this is not allowed for multi-gpu setting - batch_max_length += 1 - # additional +1 for [GO] at first step. batch_text is padded with [GO] token after [s] token. 
- batch_text = torch.LongTensor(len(text), batch_max_length + 1).fill_(0) - for i, t in enumerate(text): - text = list(t) - text.append('[s]') - text = [self.dict[char] for char in text] - batch_text[i][1:1 + len(text)] = torch.LongTensor(text) # batch_text[:, 0] = [GO] token - return (batch_text.to(device), torch.IntTensor(length).to(device)) - - def decode(self, text_index, length): - """ convert text-index into text-label. """ - texts = [] - for index, l in enumerate(length): - text = ''.join([self.character[i] for i in text_index[index, :]]) - texts.append(text) - return texts - - -class Averager(object): - """Compute average for torch.Tensor, used for loss average.""" - - def __init__(self): - self.reset() - - def add(self, v): - count = v.data.numel() - v = v.data.sum() - self.n_count += count - self.sum += v - - def reset(self): - self.n_count = 0 - self.sum = 0 - - def val(self): - res = 0 - if self.n_count != 0: - res = self.sum / float(self.n_count) - return res diff --git a/spaces/ucalyptus/PTI/models/StyleCLIP/models/facial_recognition/model_irse.py b/spaces/ucalyptus/PTI/models/StyleCLIP/models/facial_recognition/model_irse.py deleted file mode 100644 index b1c79e0366e4a6fd92011e86df80f8b31ec671ae..0000000000000000000000000000000000000000 --- a/spaces/ucalyptus/PTI/models/StyleCLIP/models/facial_recognition/model_irse.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from models.facial_recognition.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" 
- assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git 
a/spaces/ulysses115/diffsvc_test/infer_tools/slicer.py b/spaces/ulysses115/diffsvc_test/infer_tools/slicer.py deleted file mode 100644 index 35a888b906e7df8634cfdcec914f650c6cefd26a..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/diffsvc_test/infer_tools/slicer.py +++ /dev/null @@ -1,158 +0,0 @@ -import time - -import numpy as np -import torch -import torchaudio -from scipy.ndimage import maximum_filter1d, uniform_filter1d - - -def timeit(func): - def run(*args, **kwargs): - t = time.time() - res = func(*args, **kwargs) - print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -# @timeit -def _window_maximum(arr, win_sz): - return maximum_filter1d(arr, size=win_sz)[win_sz // 2: win_sz // 2 + arr.shape[0] - win_sz + 1] - - -# @timeit -def _window_rms(arr, win_sz): - filtered = np.sqrt(uniform_filter1d(np.power(arr, 2), win_sz) - np.power(uniform_filter1d(arr, win_sz), 2)) - return filtered[win_sz // 2: win_sz // 2 + arr.shape[0] - win_sz + 1] - - -def level2db(levels, eps=1e-12): - return 20 * np.log10(np.clip(levels, a_min=eps, a_max=1)) - - -def _apply_slice(audio, begin, end): - if len(audio.shape) > 1: - return audio[:, begin: end] - else: - return audio[begin: end] - - -class Slicer: - def __init__(self, - sr: int, - db_threshold: float = -40, - min_length: int = 5000, - win_l: int = 300, - win_s: int = 20, - max_silence_kept: int = 500): - self.db_threshold = db_threshold - self.min_samples = round(sr * min_length / 1000) - self.win_ln = round(sr * win_l / 1000) - self.win_sn = round(sr * win_s / 1000) - self.max_silence = round(sr * max_silence_kept / 1000) - if not self.min_samples >= self.win_ln >= self.win_sn: - raise ValueError('The following condition must be satisfied: min_length >= win_l >= win_s') - if not self.max_silence >= self.win_sn: - raise ValueError('The following condition must be satisfied: max_silence_kept >= win_s') - - @timeit - def slice(self, audio): - samples = audio - 
if samples.shape[0] <= self.min_samples: - return {"0": {"slice": False, "split_time": f"0,{len(audio)}"}} - # get absolute amplitudes - abs_amp = np.abs(samples - np.mean(samples)) - # calculate local maximum with large window - win_max_db = level2db(_window_maximum(abs_amp, win_sz=self.win_ln)) - sil_tags = [] - left = right = 0 - while right < win_max_db.shape[0]: - if win_max_db[right] < self.db_threshold: - right += 1 - elif left == right: - left += 1 - right += 1 - else: - if left == 0: - split_loc_l = left - else: - sil_left_n = min(self.max_silence, (right + self.win_ln - left) // 2) - rms_db_left = level2db(_window_rms(samples[left: left + sil_left_n], win_sz=self.win_sn)) - split_win_l = left + np.argmin(rms_db_left) - split_loc_l = split_win_l + np.argmin(abs_amp[split_win_l: split_win_l + self.win_sn]) - if len(sil_tags) != 0 and split_loc_l - sil_tags[-1][1] < self.min_samples and right < win_max_db.shape[ - 0] - 1: - right += 1 - left = right - continue - if right == win_max_db.shape[0] - 1: - split_loc_r = right + self.win_ln - else: - sil_right_n = min(self.max_silence, (right + self.win_ln - left) // 2) - rms_db_right = level2db(_window_rms(samples[right + self.win_ln - sil_right_n: right + self.win_ln], - win_sz=self.win_sn)) - split_win_r = right + self.win_ln - sil_right_n + np.argmin(rms_db_right) - split_loc_r = split_win_r + np.argmin(abs_amp[split_win_r: split_win_r + self.win_sn]) - sil_tags.append((split_loc_l, split_loc_r)) - right += 1 - left = right - if left != right: - sil_left_n = min(self.max_silence, (right + self.win_ln - left) // 2) - rms_db_left = level2db(_window_rms(samples[left: left + sil_left_n], win_sz=self.win_sn)) - split_win_l = left + np.argmin(rms_db_left) - split_loc_l = split_win_l + np.argmin(abs_amp[split_win_l: split_win_l + self.win_sn]) - sil_tags.append((split_loc_l, samples.shape[0])) - if len(sil_tags) == 0: - return {"0": {"slice": False, "split_time": f"0,{len(audio)}"}} - else: - chunks = [] - # 
第一段静音并非从头开始,补上有声片段 - if sil_tags[0][0]: - chunks.append({"slice": False, "split_time": f"0,{sil_tags[0][0]}"}) - for i in range(0, len(sil_tags)): - # 标识有声片段(跳过第一段) - if i: - chunks.append({"slice": False, "split_time": f"{sil_tags[i - 1][1]},{sil_tags[i][0]}"}) - # 标识所有静音片段 - chunks.append({"slice": True, "split_time": f"{sil_tags[i][0]},{sil_tags[i][1]}"}) - # 最后一段静音并非结尾,补上结尾片段 - if sil_tags[-1][1] != len(audio): - chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1]},{len(audio)}"}) - chunk_dict = {} - for i in range(len(chunks)): - chunk_dict[str(i)] = chunks[i] - return chunk_dict - - -def cut(audio_path, db_thresh=-30, min_len=5000, win_l=300, win_s=20, max_sil_kept=500): - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - - slicer = Slicer( - sr=sr, - db_threshold=db_thresh, - min_length=min_len, - win_l=win_l, - win_s=win_s, - max_silence_kept=max_sil_kept - ) - chunks = slicer.slice(audio) - return chunks - - -def chunks2audio(audio_path, chunks): - chunks = dict(chunks) - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - result = [] - for k, v in chunks.items(): - tag = v["split_time"].split(",") - result.append((v["slice"], audio[int(tag[0]):int(tag[1])])) - return result, sr - - diff --git a/spaces/umutozdemir/medicalai-ClinicalBERT/app.py b/spaces/umutozdemir/medicalai-ClinicalBERT/app.py deleted file mode 100644 index a4d6b3a40763cd8e1783bf90e43916990be05ffe..0000000000000000000000000000000000000000 --- a/spaces/umutozdemir/medicalai-ClinicalBERT/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/medicalai/ClinicalBERT").launch() \ No newline at end of file diff --git a/spaces/upstage/open-ko-llm-leaderboard/src/load_from_hub.py 
b/spaces/upstage/open-ko-llm-leaderboard/src/load_from_hub.py deleted file mode 100644 index a64c7344a983e53732bfb5bee0c46b0c7302ab62..0000000000000000000000000000000000000000 --- a/spaces/upstage/open-ko-llm-leaderboard/src/load_from_hub.py +++ /dev/null @@ -1,145 +0,0 @@ -import json -import os - -import pandas as pd -from huggingface_hub import Repository -from transformers import AutoConfig -from collections import defaultdict - -from src.assets.hardcoded_evals import baseline -from src.display_models.get_model_metadata import apply_metadata -from src.display_models.read_results import get_eval_results_dicts, make_clickable_model -from src.display_models.utils import AutoEvalColumn, EvalQueueColumn, has_no_nan_values - - -def get_all_requested_models(requested_models_dir: str) -> set[str]: - depth = 1 - file_names = [] - users_to_submission_dates = defaultdict(list) - - for root, _, files in os.walk(requested_models_dir): - current_depth = root.count(os.sep) - requested_models_dir.count(os.sep) - if current_depth == depth: - for file in files: - if not file.endswith(".json"): continue - with open(os.path.join(root, file), "r") as f: - info = json.load(f) - file_names.append(f"{info['model']}_{info['revision']}_{info['precision']}") - - # Select organisation - if info["model"].count("/") == 0 or "submitted_time" not in info: - continue - organisation, _ = info["model"].split("/") - users_to_submission_dates[organisation].append(info["submitted_time"]) - - return set(file_names), users_to_submission_dates - - -def load_all_info_from_hub(QUEUE_REPO: str, RESULTS_REPO: str, QUEUE_PATH: str, RESULTS_PATH: str) -> list[Repository]: - eval_queue_repo = None - eval_results_repo = None - requested_models = None - - print("Pulling evaluation requests and results.") - - eval_queue_repo = Repository( - local_dir=QUEUE_PATH, - clone_from=QUEUE_REPO, - repo_type="dataset", - ) - eval_queue_repo.git_pull() - - eval_results_repo = Repository( - local_dir=RESULTS_PATH, - 
clone_from=RESULTS_REPO, - repo_type="dataset", - ) - eval_results_repo.git_pull() - - requested_models, users_to_submission_dates = get_all_requested_models("eval-queue") - - return eval_queue_repo, requested_models, eval_results_repo, users_to_submission_dates - - -def get_leaderboard_df( - eval_results: Repository, eval_results_private: Repository, cols: list, benchmark_cols: list -) -> pd.DataFrame: - if eval_results: - print("Pulling evaluation results for the leaderboard.") - eval_results.git_pull() - if eval_results_private: - print("Pulling evaluation results for the leaderboard.") - eval_results_private.git_pull() - - all_data = get_eval_results_dicts() - - # all_data.append(baseline) - apply_metadata(all_data) # Populate model type based on known hardcoded values in `metadata.py` - - df = pd.DataFrame.from_records(all_data) - df = df.sort_values(by=[AutoEvalColumn.average.name], ascending=False) - df = df[cols].round(decimals=2) - - # filter out if any of the benchmarks have not been produced - df = df[has_no_nan_values(df, benchmark_cols)] - return df - - -def get_evaluation_queue_df( - eval_queue: Repository, eval_queue_private: Repository, save_path: str, cols: list -) -> list[pd.DataFrame]: - if eval_queue: - print("Pulling changes for the evaluation queue.") - eval_queue.git_pull() - if eval_queue_private: - print("Pulling changes for the evaluation queue.") - eval_queue_private.git_pull() - - entries = [entry for entry in os.listdir(save_path) if not entry.startswith(".")] - all_evals = [] - - for entry in entries: - if ".json" in entry: - file_path = os.path.join(save_path, entry) - with open(file_path) as fp: - data = json.load(fp) - - data[EvalQueueColumn.model.name] = make_clickable_model(data["model"]) - data[EvalQueueColumn.revision.name] = data.get("revision", "main") - - all_evals.append(data) - elif ".md" not in entry: - # this is a folder - sub_entries = [e for e in os.listdir(f"{save_path}/{entry}") if not e.startswith(".")] - for 
sub_entry in sub_entries: - file_path = os.path.join(save_path, entry, sub_entry) - with open(file_path) as fp: - data = json.load(fp) - - data[EvalQueueColumn.model.name] = make_clickable_model(data["model"]) - data[EvalQueueColumn.revision.name] = data.get("revision", "main") - all_evals.append(data) - - pending_list = [e for e in all_evals if e["status"] in ["PENDING", "RERUN"]] - running_list = [e for e in all_evals if e["status"] == "RUNNING"] - finished_list = [e for e in all_evals if e["status"].startswith("FINISHED") or e["status"] == "PENDING_NEW_EVAL"] - df_pending = pd.DataFrame.from_records(pending_list, columns=cols) - df_running = pd.DataFrame.from_records(running_list, columns=cols) - df_finished = pd.DataFrame.from_records(finished_list, columns=cols) - return df_finished[cols], df_running[cols], df_pending[cols] - - -def is_model_on_hub(model_name: str, revision: str) -> bool: - try: - AutoConfig.from_pretrained(model_name, revision=revision, trust_remote_code=False) - return True, None - - except ValueError: - return ( - False, - "needs to be launched with `trust_remote_code=True`. For safety reason, we do not allow these models to be automatically submitted to the leaderboard.", - ) - - except Exception as e: - print(f"Could not get the model config from the hub.: {e}") - return False, "was not found on hub!" diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Cingiz Abdullayev Mavi Melekler Yukle Pdf tribehotyoga[3].md b/spaces/usbethFlerru/sovits-modelsV2/example/Cingiz Abdullayev Mavi Melekler Yukle Pdf tribehotyoga[3].md deleted file mode 100644 index 77d7a7bd6a62090899db09594b335a9c80c61a8a..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Cingiz Abdullayev Mavi Melekler Yukle Pdf tribehotyoga[3].md +++ /dev/null @@ -1,6 +0,0 @@ -


            diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/v8/classify/train.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/v8/classify/train.py deleted file mode 100644 index 72feb55913d2eabc097ab78f628533eb315857f3..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/v8/classify/train.py +++ /dev/null @@ -1,161 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license - -import torch -import torchvision - -from ultralytics.nn.tasks import ClassificationModel, attempt_load_one_weight -from ultralytics.yolo import v8 -from ultralytics.yolo.data import ClassificationDataset, build_dataloader -from ultralytics.yolo.engine.trainer import BaseTrainer -from ultralytics.yolo.utils import DEFAULT_CFG, LOGGER, RANK, colorstr -from ultralytics.yolo.utils.plotting import plot_images, plot_results -from ultralytics.yolo.utils.torch_utils import is_parallel, strip_optimizer, torch_distributed_zero_first - - -class ClassificationTrainer(BaseTrainer): - - def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None): - """Initialize a ClassificationTrainer object with optional configuration overrides and callbacks.""" - if overrides is None: - overrides = {} - overrides['task'] = 'classify' - if overrides.get('imgsz') is None: - overrides['imgsz'] = 224 - super().__init__(cfg, overrides, _callbacks) - - def set_model_attributes(self): - """Set the YOLO model's class names from the loaded dataset.""" - self.model.names = self.data['names'] - - def get_model(self, cfg=None, weights=None, verbose=True): - """Returns a modified PyTorch model configured for training YOLO.""" - model = ClassificationModel(cfg, nc=self.data['nc'], verbose=verbose and RANK == -1) - if weights: - model.load(weights) - - for m in model.modules(): - if not self.args.pretrained and hasattr(m, 'reset_parameters'): - m.reset_parameters() - if 
isinstance(m, torch.nn.Dropout) and self.args.dropout: - m.p = self.args.dropout # set dropout - for p in model.parameters(): - p.requires_grad = True # for training - return model - - def setup_model(self): - """ - load/create/download model for any task - """ - # Classification models require special handling - - if isinstance(self.model, torch.nn.Module): # if model is loaded beforehand. No setup needed - return - - model = str(self.model) - # Load a YOLO model locally, from torchvision, or from Ultralytics assets - if model.endswith('.pt'): - self.model, _ = attempt_load_one_weight(model, device='cpu') - for p in self.model.parameters(): - p.requires_grad = True # for training - elif model.endswith('.yaml'): - self.model = self.get_model(cfg=model) - elif model in torchvision.models.__dict__: - self.model = torchvision.models.__dict__[model](weights='IMAGENET1K_V1' if self.args.pretrained else None) - else: - FileNotFoundError(f'ERROR: model={model} not found locally or online. Please check model name.') - ClassificationModel.reshape_outputs(self.model, self.data['nc']) - - return # dont return ckpt. 
Classification doesn't support resume - - def build_dataset(self, img_path, mode='train', batch=None): - return ClassificationDataset(root=img_path, args=self.args, augment=mode == 'train') - - def get_dataloader(self, dataset_path, batch_size=16, rank=0, mode='train'): - """Returns PyTorch DataLoader with transforms to preprocess images for inference.""" - with torch_distributed_zero_first(rank): # init dataset *.cache only once if DDP - dataset = self.build_dataset(dataset_path, mode) - - loader = build_dataloader(dataset, batch_size, self.args.workers, rank=rank) - # Attach inference transforms - if mode != 'train': - if is_parallel(self.model): - self.model.module.transforms = loader.dataset.torch_transforms - else: - self.model.transforms = loader.dataset.torch_transforms - return loader - - def preprocess_batch(self, batch): - """Preprocesses a batch of images and classes.""" - batch['img'] = batch['img'].to(self.device) - batch['cls'] = batch['cls'].to(self.device) - return batch - - def progress_string(self): - """Returns a formatted string showing training progress.""" - return ('\n' + '%11s' * (4 + len(self.loss_names))) % \ - ('Epoch', 'GPU_mem', *self.loss_names, 'Instances', 'Size') - - def get_validator(self): - """Returns an instance of ClassificationValidator for validation.""" - self.loss_names = ['loss'] - return v8.classify.ClassificationValidator(self.test_loader, self.save_dir) - - def label_loss_items(self, loss_items=None, prefix='train'): - """ - Returns a loss dict with labelled training loss items tensor - """ - # Not needed for classification but necessary for segmentation & detection - keys = [f'{prefix}/{x}' for x in self.loss_names] - if loss_items is None: - return keys - loss_items = [round(float(loss_items), 5)] - return dict(zip(keys, loss_items)) - - def resume_training(self, ckpt): - """Resumes training from a given checkpoint.""" - pass - - def plot_metrics(self): - """Plots metrics from a CSV file.""" - 
plot_results(file=self.csv, classify=True, on_plot=self.on_plot) # save results.png - - def final_eval(self): - """Evaluate trained model and save validation results.""" - for f in self.last, self.best: - if f.exists(): - strip_optimizer(f) # strip optimizers - # TODO: validate best.pt after training completes - # if f is self.best: - # LOGGER.info(f'\nValidating {f}...') - # self.validator.args.save_json = True - # self.metrics = self.validator(model=f) - # self.metrics.pop('fitness', None) - # self.run_callbacks('on_fit_epoch_end') - LOGGER.info(f"Results saved to {colorstr('bold', self.save_dir)}") - - def plot_training_samples(self, batch, ni): - """Plots training samples with their annotations.""" - plot_images(images=batch['img'], - batch_idx=torch.arange(len(batch['img'])), - cls=batch['cls'].squeeze(-1), - fname=self.save_dir / f'train_batch{ni}.jpg', - on_plot=self.on_plot) - - -def train(cfg=DEFAULT_CFG, use_python=False): - """Train the YOLO classification model.""" - model = cfg.model or 'yolov8n-cls.pt' # or "resnet18" - data = cfg.data or 'mnist160' # or yolo.ClassificationDataset("mnist") - device = cfg.device if cfg.device is not None else '' - - args = dict(model=model, data=data, device=device) - if use_python: - from ultralytics import YOLO - YOLO(model).train(**args) - else: - trainer = ClassificationTrainer(overrides=args) - trainer.train() - - -if __name__ == '__main__': - train() diff --git a/spaces/valhalla/minDALLE/dalle/models/stage1/vqgan.py b/spaces/valhalla/minDALLE/dalle/models/stage1/vqgan.py deleted file mode 100644 index 7f03a4d02aa579275d58290bc4f3714fd58bfe00..0000000000000000000000000000000000000000 --- a/spaces/valhalla/minDALLE/dalle/models/stage1/vqgan.py +++ /dev/null @@ -1,93 +0,0 @@ -# ------------------------------------------------------------------------------------ -# Modified from VQGAN (https://github.com/CompVis/taming-transformers) -# Copyright (c) 2020 Patrick Esser and Robin Rombach and Björn Ommer. 
All Rights Reserved. -# ------------------------------------------------------------------------------------ - -import torch -import torch.nn as nn -from typing import List, Tuple, Optional -from einops import rearrange -from omegaconf import OmegaConf -from .layers import Encoder, Decoder - - -class VectorQuantizer(nn.Module): - """ - Simplified VectorQuantizer in the original VQGAN repository - by removing unncessary modules for sampling - """ - def __init__(self, dim: int, n_embed: int, beta: float) -> None: - super().__init__() - self.n_embed = n_embed - self.dim = dim - self.beta = beta - - self.embedding = nn.Embedding(self.n_embed, self.dim) - self.embedding.weight.data.uniform_(-1.0 / self.n_embed, 1.0 / self.n_embed) - - def forward(self, - z: torch.FloatTensor) -> Tuple[torch.FloatTensor, torch.LongTensor]: - z = rearrange(z, 'b c h w -> b h w c').contiguous() # [B,C,H,W] -> [B,H,W,C] - z_flattened = z.view(-1, self.dim) - - d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \ - torch.sum(self.embedding.weight**2, dim=1) - 2 * \ - torch.einsum('bd,dn->bn', z_flattened, rearrange(self.embedding.weight, 'n d -> d n')) - - min_encoding_indices = torch.argmin(d, dim=1) - z_q = self.embedding(min_encoding_indices).view(z.shape) - return z_q, min_encoding_indices - - def get_codebook_entry(self, - indices: torch.LongTensor, - shape: Optional[List[int]] = None) -> torch.FloatTensor: - z_q = self.embedding(indices) - if shape is not None: - z_q = z_q.view(shape) - z_q = z_q.permute(0, 3, 1, 2).contiguous() - return z_q - - -class VQGAN(nn.Module): - def __init__(self, n_embed: int, embed_dim: int, hparams: OmegaConf) -> None: - super().__init__() - self.encoder = Encoder(**hparams) - self.decoder = Decoder(**hparams) - self.quantize = VectorQuantizer(dim=embed_dim, n_embed=n_embed, beta=0.25) - self.quant_conv = torch.nn.Conv2d(hparams.z_channels, embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, hparams.z_channels, 1) - self.latent_dim = 
hparams.attn_resolutions[0] - - def forward(self, x: torch.FloatTensor) -> torch.FloatTensor: - quant = self.encode(x) - dec = self.decode(quant) - return dec - - def encode(self, x: torch.FloatTensor) -> torch.FloatTensor: - h = self.encoder(x) - h = self.quant_conv(h) - quant = self.quantize(h)[0] - quant = rearrange(quant, 'b h w c -> b c h w').contiguous() - return quant - - def decode(self, quant: torch.FloatTensor) -> torch.FloatTensor: - quant = self.post_quant_conv(quant) - dec = self.decoder(quant) - return dec - - def decode_code(self, code: torch.LongTensor) -> torch.FloatTensor: - quant = self.quantize.get_codebook_entry(code) - quant = quant.permute(0, 3, 1, 2) - dec = self.decode(quant) - return dec - - def get_codes(self, x: torch.FloatTensor) -> torch.LongTensor: - h = self.encoder(x) - h = self.quant_conv(h) - codes = self.quantize(h)[1].view(x.shape[0], self.latent_dim ** 2) - return codes - - def from_ckpt(self, path: str, strict: bool = True) -> None: - ckpt = torch.load(path, map_location='cpu')['state_dict'] - self.load_state_dict(ckpt, strict=strict) - print(f'{path} successfully restored..') diff --git a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/app/explore_cvae.py b/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/app/explore_cvae.py deleted file mode 100644 index 29a89f6e5cb6cf13e983e85693749be37f1c64f2..0000000000000000000000000000000000000000 --- a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/app/explore_cvae.py +++ /dev/null @@ -1,247 +0,0 @@ -import math - -import streamlit as st -import numpy as np - -import torch -import torch.nn.functional as F - -import src.app.params as params -from src.app.questions import q1, q1_options, q2, q2_options, q3, q3_options, q4, q4_options, q5, q5_options, \ - q6, q6_options, q7, q7_options, q8, q8_options, q9, q9_options, q10, q10_options, q11, q11_options -from src.models import ConditionalDecoder -from src.data import get_labels_train, make_galaxy_labels_hierarchical -from src.utils import 
sample_labels - - -# global parameters -device = params.device -size = params.size -y_size = shape_label = params.shape_label -n_channels = params.n_channels -upsample = params.upsample -dim_z = params.dim_z -bs = 16 # number of samples to generate -n_cols = int(math.sqrt(bs)) -model_path = params.path_cvae -path_labels = params.path_labels - -# manual labels -q1_out = [0] * len(q1_options) -q2_out = [0] * len(q2_options) -q3_out = [0] * len(q3_options) -q4_out = [0] * len(q4_options) -q5_out = [0] * len(q5_options) -q6_out = [0] * len(q6_options) -q7_out = [0] * len(q7_options) -q8_out = [0] * len(q8_options) -q9_out = [0] * len(q9_options) -q10_out = [0] * len(q10_options) -q11_out = [0] * len(q11_options) - - -def clear_out(elems=None): - global q1_out, q2_out, q3_out, q4_out, q5_out, q6_out, q6_out, q7_out, q8_out, q9_out, q10_out, q11_out - - if elems is None: - elems = list(range(1, 12)) - - if 1 in elems: - q1_out = [0] * len(q1_options) - if 2 in elems: - q2_out = [0] * len(q2_options) - if 3 in elems: - q3_out = [0] * len(q3_options) - if 4 in elems: - q4_out = [0] * len(q4_options) - if 5 in elems: - q5_out = [0] * len(q5_options) - if 6 in elems: - q6_out = [0] * len(q6_options) - if 7 in elems: - q7_out = [0] * len(q7_options) - if 8 in elems: - q8_out = [0] * len(q8_options) - if 9 in elems: - q9_out = [0] * len(q9_options) - if 10 in elems: - q10_out = [0] * len(q10_options) - if 11 in elems: - q11_out = [0] * len(q11_options) - - -@st.cache(allow_output_mutation=True) -def load_model(model_path: str) -> ConditionalDecoder: - - print(f'Loading model: {model_path}') - g = ConditionalDecoder() - ckpt = torch.load(model_path, map_location=torch.device('cpu')) - g.load_state_dict(ckpt) - g.eval().to(device) - return g - - -def get_eps(n: int) -> torch.Tensor: - eps = torch.randn((n, dim_z), device=device) - return eps - - -@st.cache -def get_labels() -> torch.Tensor: - labels_train = get_labels_train(path_labels) - return labels_train - - -def app(): - 
global q1_out, q2_out, q3_out, q4_out, q5_out, q6_out, q6_out, q7_out, q8_out, q9_out, q10_out, q11_out - - st.title('Explore cVAE') - st.markdown('This demo shows cVAE for conditional galaxy generation') - - model = load_model(model_path) - eps = get_eps(bs) - labels_train = get_labels() - - # ========================== Labels ================================ - st.subheader('Label') - st.markdown(r'There are two types of selecting labels: __Random__ - sample random samples from the dataset;' - r' __Manual__ - select labels manually (advanced use). When using __Manual__ all of the images will be' - r' generated with tha same labels') - label_type = st.radio('Label type', options=['Random', 'Manual (Advanced)']) - if label_type == 'Random': - labels = sample_labels(labels_train, bs).to(device) - - st.markdown(r'Click on __Sample labels__ button to sample random input labels') - change_label = st.button('Sample label') - - if change_label: - labels = sample_labels(labels_train, bs).to(device) - elif label_type == 'Manual (Advanced)': - st.markdown('Answer the questions below') - - q1_select_box = st.selectbox(q1, options=q1_options) - clear_out() - q1_out[q1_options.index(q1_select_box)] = 1 - # 1 - - if q1_select_box == 'Smooth': - q7_select_box = st.selectbox(q7, options=q7_options) - clear_out([2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) - q7_out[q7_options.index(q7_select_box)] = 1 - # 1 - 7 - - q6_select_box = st.selectbox(q6, options=q6_options) - clear_out([2, 3, 4, 5, 6, 8, 9, 10, 11]) - q6_out[q6_options.index(q6_select_box)] = 1 - # 1 - 7 - 6 - - if q6_select_box == 'Yes': - q8_select_box = st.selectbox(q8, options=q8_options) - clear_out([2, 3, 4, 5, 8, 9, 10, 11]) - q8_out[q8_options.index(q8_select_box)] = 1 - # 1 - 7 - 6 - 8 - end - - elif q1_select_box == 'Features or disk': - q2_select_box = st.selectbox(q2, options=q2_options) - clear_out([2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) - q2_out[q2_options.index(q2_select_box)] = 1 - # 1 - 2 - - if q2_select_box == 'Yes': - 
q9_select_box = st.selectbox(q9, options=q9_options) - clear_out([3, 4, 5, 6, 7, 8, 9, 10, 11]) - q9_out[q9_options.index(q9_select_box)] = 1 - # 1 - 2 - 9 - - q6_select_box = st.selectbox(q6, options=q6_options) - clear_out([3, 4, 5, 6, 7, 8, 10, 11]) - q6_out[q6_options.index(q6_select_box)] = 1 - # 1 - 2 - 9 - 6 - - if q6_select_box == 'Yes': - q8_select_box = st.selectbox(q8, options=q8_options) - clear_out([3, 4, 5, 7, 8, 10, 11]) - q8_out[q8_options.index(q8_select_box)] = 1 - # 1 - 2 - 9 - 6 - 8 - else: - q3_select_box = st.selectbox(q3, options=q3_options) - clear_out([3, 4, 5, 6, 7, 8, 9, 10, 11]) - q3_out[q3_options.index(q3_select_box)] = 1 - # 1 - 2 - 3 - - q4_select_box = st.selectbox(q4, options=q4_options) - clear_out([4, 5, 6, 7, 8, 9, 10, 11]) - q4_out[q4_options.index(q4_select_box)] = 1 - # 1 - 2 - 3 - 4 - - if q4_select_box == 'Yes': - q10_select_box = st.selectbox(q10, options=q10_options) - clear_out([5, 6, 7, 8, 9, 10, 11]) - q10_out[q10_options.index(q10_select_box)] = 1 - # 1 - 2 - 3 - 4 - 10 - - q11_select_box = st.selectbox(q11, options=q11_options) - clear_out([5, 6, 7, 8, 9, 11]) - q11_out[q11_options.index(q11_select_box)] = 1 - # 1 - 2 - 3 - 4 - 10 - 11 - - q5_select_box = st.selectbox(q5, options=q5_options) - clear_out([5, 6, 7, 8, 9]) - q5_out[q5_options.index(q5_select_box)] = 1 - # 1 - 2 - 3 - 4 - 10 - 11 - 5 - - q6_select_box = st.selectbox(q6, options=q6_options) - clear_out([6, 7, 8, 9]) - q6_out[q6_options.index(q6_select_box)] = 1 - # 1 - 2 - 3 - 4 - 10 - 11 - 5 - 6 - - if q6_select_box == 'Yes': - q8_select_box = st.selectbox(q8, options=q8_options) - clear_out([7, 8, 9]) - q8_out[q8_options.index(q8_select_box)] = 1 - # 1 - 2 - 3 - 4 - 10 - 11 - 5 - 6 - 8 - End - else: - q5_select_box = st.selectbox(q5, options=q5_options) - clear_out([5, 6, 7, 8, 9, 10, 11]) - q5_out[q5_options.index(q5_select_box)] = 1 - # 1 - 2 - 3 - 4 - 5 - - q6_select_box = st.selectbox(q6, options=q6_options) - clear_out([6, 7, 8, 9, 10, 11]) - 
q6_out[q6_options.index(q6_select_box)] = 1 - # 1 - 2 - 3 - 4 - 5 - 6 - - if q6_select_box == 'Yes': - q8_select_box = st.selectbox(q8, options=q8_options) - clear_out([7, 8, 9, 10, 11]) - q8_out[q8_options.index(q8_select_box)] = 1 - # 1 - 2 - 3 - 4 - 5 - 6 - 8 - End - - labels = [*q1_out, *q2_out, *q3_out, *q4_out, *q5_out, *q6_out, *q7_out, *q8_out, *q9_out, *q10_out, *q11_out] - labels = torch.Tensor(labels).to(device) - labels = labels.unsqueeze(0).repeat(bs, 1) - labels = make_galaxy_labels_hierarchical(labels) - clear_out() - # ========================== Labels ================================ - - st.subheader('Noise') - st.markdown(r'Click on __Change eps__ button to change input $\varepsilon$ latent space') - change_eps = st.button('Change eps') - if change_eps: - eps = get_eps(bs) - - with torch.no_grad(): - imgs = model(eps, labels) - - if upsample: - imgs = F.interpolate(imgs, (size * 4, size * 4), mode='bicubic') - - imgs = [(imgs[i].permute(1, 2, 0).numpy() * 127.5 + 127.5).astype(np.uint8) for i in range(bs)] - - counter = 0 - for r in range(bs // n_cols): - cols = st.columns(n_cols) - - for c in range(n_cols): - cols[c].image(imgs[counter]) - counter += 1 diff --git a/spaces/vtk51/Lama-Cleaner-lama/README.md b/spaces/vtk51/Lama-Cleaner-lama/README.md deleted file mode 100644 index 34fec6eb0c7e0b523863096b4835b8e25bb4ba52..0000000000000000000000000000000000000000 --- a/spaces/vtk51/Lama-Cleaner-lama/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Lama Cleaner Lama -emoji: ⚡ -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: Sanster/Lama-Cleaner-lama ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vumichien/Generate_human_motion/VQ-Trans/checkpoints/train_vq.py b/spaces/vumichien/Generate_human_motion/VQ-Trans/checkpoints/train_vq.py deleted file mode 100644 index 
d89b9930ba1262747542df3d5b2f03f8fab1b04a..0000000000000000000000000000000000000000 --- a/spaces/vumichien/Generate_human_motion/VQ-Trans/checkpoints/train_vq.py +++ /dev/null @@ -1,171 +0,0 @@ -import os -import json - -import torch -import torch.optim as optim -from torch.utils.tensorboard import SummaryWriter - -import models.vqvae as vqvae -import utils.losses as losses -import options.option_vq as option_vq -import utils.utils_model as utils_model -from dataset import dataset_VQ, dataset_TM_eval -import utils.eval_trans as eval_trans -from options.get_eval_option import get_opt -from models.evaluator_wrapper import EvaluatorModelWrapper -import warnings -warnings.filterwarnings('ignore') -from utils.word_vectorizer import WordVectorizer - -def update_lr_warm_up(optimizer, nb_iter, warm_up_iter, lr): - - current_lr = lr * (nb_iter + 1) / (warm_up_iter + 1) - for param_group in optimizer.param_groups: - param_group["lr"] = current_lr - - return optimizer, current_lr - -##### ---- Exp dirs ---- ##### -args = option_vq.get_args_parser() -torch.manual_seed(args.seed) - -args.out_dir = os.path.join(args.out_dir, f'{args.exp_name}') -os.makedirs(args.out_dir, exist_ok = True) - -##### ---- Logger ---- ##### -logger = utils_model.get_logger(args.out_dir) -writer = SummaryWriter(args.out_dir) -logger.info(json.dumps(vars(args), indent=4, sort_keys=True)) - - - -w_vectorizer = WordVectorizer('./glove', 'our_vab') - -if args.dataname == 'kit' : - dataset_opt_path = 'checkpoints/kit/Comp_v6_KLD005/opt.txt' - args.nb_joints = 21 - -else : - dataset_opt_path = 'checkpoints/t2m/Comp_v6_KLD005/opt.txt' - args.nb_joints = 22 - -logger.info(f'Training on {args.dataname}, motions are with {args.nb_joints} joints') - -wrapper_opt = get_opt(dataset_opt_path, torch.device('cuda')) -eval_wrapper = EvaluatorModelWrapper(wrapper_opt) - - -##### ---- Dataloader ---- ##### -train_loader = dataset_VQ.DATALoader(args.dataname, - args.batch_size, - window_size=args.window_size, - 
unit_length=2**args.down_t) - -train_loader_iter = dataset_VQ.cycle(train_loader) - -val_loader = dataset_TM_eval.DATALoader(args.dataname, False, - 32, - w_vectorizer, - unit_length=2**args.down_t) - -##### ---- Network ---- ##### -net = vqvae.HumanVQVAE(args, ## use args to define different parameters in different quantizers - args.nb_code, - args.code_dim, - args.output_emb_width, - args.down_t, - args.stride_t, - args.width, - args.depth, - args.dilation_growth_rate, - args.vq_act, - args.vq_norm) - - -if args.resume_pth : - logger.info('loading checkpoint from {}'.format(args.resume_pth)) - ckpt = torch.load(args.resume_pth, map_location='cpu') - net.load_state_dict(ckpt['net'], strict=True) -net.train() -net.cuda() - -##### ---- Optimizer & Scheduler ---- ##### -optimizer = optim.AdamW(net.parameters(), lr=args.lr, betas=(0.9, 0.99), weight_decay=args.weight_decay) -scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=args.lr_scheduler, gamma=args.gamma) - - -Loss = losses.ReConsLoss(args.recons_loss, args.nb_joints) - -##### ------ warm-up ------- ##### -avg_recons, avg_perplexity, avg_commit = 0., 0., 0. - -for nb_iter in range(1, args.warm_up_iter): - - optimizer, current_lr = update_lr_warm_up(optimizer, nb_iter, args.warm_up_iter, args.lr) - - gt_motion = next(train_loader_iter) - gt_motion = gt_motion.cuda().float() # (bs, 64, dim) - - pred_motion, loss_commit, perplexity = net(gt_motion) - loss_motion = Loss(pred_motion, gt_motion) - loss_vel = Loss.forward_vel(pred_motion, gt_motion) - - loss = loss_motion + args.commit * loss_commit + args.loss_vel * loss_vel - - optimizer.zero_grad() - loss.backward() - optimizer.step() - - avg_recons += loss_motion.item() - avg_perplexity += perplexity.item() - avg_commit += loss_commit.item() - - if nb_iter % args.print_iter == 0 : - avg_recons /= args.print_iter - avg_perplexity /= args.print_iter - avg_commit /= args.print_iter - - logger.info(f"Warmup. 
Iter {nb_iter} : lr {current_lr:.5f} \t Commit. {avg_commit:.5f} \t PPL. {avg_perplexity:.2f} \t Recons. {avg_recons:.5f}") - - avg_recons, avg_perplexity, avg_commit = 0., 0., 0. - -##### ---- Training ---- ##### -avg_recons, avg_perplexity, avg_commit = 0., 0., 0. -best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, writer, logger = eval_trans.evaluation_vqvae(args.out_dir, val_loader, net, logger, writer, 0, best_fid=1000, best_iter=0, best_div=100, best_top1=0, best_top2=0, best_top3=0, best_matching=100, eval_wrapper=eval_wrapper) - -for nb_iter in range(1, args.total_iter + 1): - - gt_motion = next(train_loader_iter) - gt_motion = gt_motion.cuda().float() # bs, nb_joints, joints_dim, seq_len - - pred_motion, loss_commit, perplexity = net(gt_motion) - loss_motion = Loss(pred_motion, gt_motion) - loss_vel = Loss.forward_vel(pred_motion, gt_motion) - - loss = loss_motion + args.commit * loss_commit + args.loss_vel * loss_vel - - optimizer.zero_grad() - loss.backward() - optimizer.step() - scheduler.step() - - avg_recons += loss_motion.item() - avg_perplexity += perplexity.item() - avg_commit += loss_commit.item() - - if nb_iter % args.print_iter == 0 : - avg_recons /= args.print_iter - avg_perplexity /= args.print_iter - avg_commit /= args.print_iter - - writer.add_scalar('./Train/L1', avg_recons, nb_iter) - writer.add_scalar('./Train/PPL', avg_perplexity, nb_iter) - writer.add_scalar('./Train/Commit', avg_commit, nb_iter) - - logger.info(f"Train. Iter {nb_iter} : \t Commit. {avg_commit:.5f} \t PPL. {avg_perplexity:.2f} \t Recons. 
{avg_recons:.5f}") - - avg_recons, avg_perplexity, avg_commit = 0., 0., 0., - - if nb_iter % args.eval_iter==0 : - best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, writer, logger = eval_trans.evaluation_vqvae(args.out_dir, val_loader, net, logger, writer, nb_iter, best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, eval_wrapper=eval_wrapper) - \ No newline at end of file diff --git a/spaces/vumichien/Generate_human_motion/pyrender/tests/unit/__init__.py b/spaces/vumichien/Generate_human_motion/pyrender/tests/unit/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/weibinke/vits-simple-api/bert_vits2/text/tone_sandhi.py b/spaces/weibinke/vits-simple-api/bert_vits2/text/tone_sandhi.py deleted file mode 100644 index c0a78a52818cff976eee838d3724a41730421a57..0000000000000000000000000000000000000000 --- a/spaces/weibinke/vits-simple-api/bert_vits2/text/tone_sandhi.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi(): - def __init__(self): - self.must_neural_tone_words = { - '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝', - '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊', - '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去', - '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号', - '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当', - '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻', - '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂', - '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆', - '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂', - '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿', - '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台', - '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算', - '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨', - '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快', - '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜', - '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔', - '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事', - '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾', - '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼', - '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实', - '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头', - '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', - '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', - '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', - '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', - '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', - '大意', '大夫', '多少', 
'多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', - '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', - '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', - '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', - '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', - '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', - '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', - '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故', - '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', - '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', - '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', - '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', - '扫把', '惦记' - } - self.must_not_neural_tone_words = { - "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎" - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. - # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, - finals: List[str]) -> List[str]: - - # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if j - 1 >= 0 and item == word[j - 1] and pos[0] in { - "n", "v", "a" - } and word not in self.must_not_neural_tone_words: - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif len(word) > 1 and word[-1] in "们子" and pos in { - "r", "n" - } and word not in self.must_not_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 
桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个做量词 - elif (ge_idx >= 1 and - (word[ge_idx - 1].isnumeric() or - word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个': - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]] - for i, word in enumerate(word_list): - # conventional neural in Chinese - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + - 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"]): - return finals - # "一" between reduplication words shold be yi5, e.g. 看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 
一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 一天 - else: - # "一" 后面如果是标点,还读一声 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword):] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[:-len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [ - finals[:len(word_list[0])], finals[len(word_list[0]):] - ] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 
好/喜欢 - elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \ - finals_list[0][-1][-1] == "3": - - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split an idiom into two words whose length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word after it - # if not merged, "不" sometimes appears alone according to jieba, which may cause sandhi errors - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, 'd')) - last_word = "" - return new_seg - - # function 1: merge "一" and the reduplication words on its left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word after it - # if not merged, "一" sometimes appears alone according to jieba, which may cause sandhi errors - # e.g. 
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')] - # output seg: [['听一听', 'v']] - def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - # function 1 - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][ - 0] == seg[i + 1][0] and seg[i - 1][1] == "v": - new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0] - else: - if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][ - 0] == word and pos == "v": - continue - else: - new_seg.append([word, pos]) - seg = new_seg - new_seg = [] - # function 2 - for i, (word, pos) in enumerate(seg): - if new_seg and new_seg[-1][0] == "一": - new_seg[-1][0] = new_seg[-1][0] + word - else: - new_seg.append([word, pos]) - return new_seg - - # the first and the second words are all_tone_three - def _merge_continuous_three_tones( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and self._all_tone_three( - sub_finals_list[i - 1]) and self._all_tone_three( - sub_finals_list[i]) and not merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - - return new_seg - - def _is_reduplication(self, word: str) -> bool: - return len(word) == 2 and word[0] == word[1] - - # the last char of first word and the first char of second word is tone_three - def _merge_continuous_three_tones_2( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - 
sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \ - merge_last[i - 1]: - # if the previous word is a reduplication, do not merge, because it still needs to go through _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "儿" and seg[i - 1][0] != "#": - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_reduplication( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if new_seg and word == new_seg[-1][0]: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def pre_merge_for_modify( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - seg = self._merge_bu(seg) - try: - seg = self._merge_yi(seg) - except Exception: - print("_merge_yi failed") - seg = self._merge_reduplication(seg) - seg = self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, - finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git 
a/spaces/wseo/i18n-huggingface/README.md deleted file mode 100644 index 9644c045232c14c8d3b68e6592fbf9e210a3fd18..0000000000000000000000000000000000000000 --- a/spaces/wseo/i18n-huggingface/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: HuggingFace docs i18n -emoji: 🌍 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: true -license: apache-2.0 -duplicated_from: Hyeonseo/ChatGPT-ko-translation-prompt ---- -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/xfys/yolov5_tracking/val_utils/trackeval/baselines/thresholder.py deleted file mode 100644 index c589e10b95da311c03ed1045bc1d6af8f1a8c90e..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/val_utils/trackeval/baselines/thresholder.py +++ /dev/null @@ -1,92 +0,0 @@ -""" -Thresholder - -Author: Jonathon Luiten - -Simply reads in a set of detections, thresholds them at a certain score threshold, and writes them out again. -""" - -import os -import sys -from multiprocessing.pool import Pool -from multiprocessing import freeze_support - -sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))) -from trackeval.baselines import baseline_utils as butils -from trackeval.utils import get_code_path - -THRESHOLD = 0.2 - -code_path = get_code_path() -config = { - 'INPUT_FOL': os.path.join(code_path, 'data/detections/rob_mots/{split}/non_overlap_supplied/data/'), - 'OUTPUT_FOL': os.path.join(code_path, 'data/detections/rob_mots/{split}/threshold_' + str(100*THRESHOLD) + '/data/'), - 'SPLIT': 'train', # valid: 'train', 'val', 'test'. - 'Benchmarks': None, # If None, all benchmarks in SPLIT. - - 'Num_Parallel_Cores': None, # If None, run without parallelism. 
- - 'DETECTION_THRESHOLD': THRESHOLD, -} - - -def do_sequence(seq_file): - - # Load input data from file (e.g. provided detections) - # data format: data['cls'][t] = {'ids', 'scores', 'im_hs', 'im_ws', 'mask_rles'} - data = butils.load_seq(seq_file) - - # Where to accumulate output data for writing out - output_data = [] - - # Run for each class. - for cls, cls_data in data.items(): - - # Run for each timestep. - for timestep, t_data in enumerate(cls_data): - - # Threshold detections. - t_data = butils.threshold(t_data, config['DETECTION_THRESHOLD']) - - # Save result in output format to write to file later. - # Output Format = [timestep ID class score im_h im_w mask_RLE] - for i in range(len(t_data['ids'])): - row = [timestep, int(t_data['ids'][i]), cls, t_data['scores'][i], t_data['im_hs'][i], - t_data['im_ws'][i], t_data['mask_rles'][i]] - output_data.append(row) - - # Write results to file - out_file = seq_file.replace(config['INPUT_FOL'].format(split=config['SPLIT']), - config['OUTPUT_FOL'].format(split=config['SPLIT'])) - butils.write_seq(output_data, out_file) - - print('DONE:', seq_file) - - -if __name__ == '__main__': - - # Required to fix bug in multiprocessing on windows. - freeze_support() - - # Obtain list of sequences to run tracker for. 
- if config['Benchmarks']: - benchmarks = config['Benchmarks'] - else: - benchmarks = ['davis_unsupervised', 'kitti_mots', 'youtube_vis', 'ovis', 'bdd_mots', 'tao'] - if config['SPLIT'] != 'train': - benchmarks += ['waymo', 'mots_challenge'] - seqs_todo = [] - for bench in benchmarks: - bench_fol = os.path.join(config['INPUT_FOL'].format(split=config['SPLIT']), bench) - seqs_todo += [os.path.join(bench_fol, seq) for seq in os.listdir(bench_fol)] - - # Run in parallel - if config['Num_Parallel_Cores']: - with Pool(config['Num_Parallel_Cores']) as pool: - results = pool.map(do_sequence, seqs_todo) - - # Run in series - else: - for seq_todo in seqs_todo: - do_sequence(seq_todo) - diff --git a/spaces/xiaolongbaox/gpt2.0/modules/presets.py b/spaces/xiaolongbaox/gpt2.0/modules/presets.py deleted file mode 100644 index 386caead42629ad89ecbe531317370a18dcb13df..0000000000000000000000000000000000000000 --- a/spaces/xiaolongbaox/gpt2.0/modules/presets.py +++ /dev/null @@ -1,198 +0,0 @@ -# -*- coding:utf-8 -*- -import gradio as gr -from pathlib import Path - -# ChatGPT 设置 -initial_prompt = "You are a helpful assistant." 
-API_HOST = "api.openai.com" -COMPLETION_URL = "https://api.openai.com/v1/chat/completions" -BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants" -USAGE_API_URL="https://api.openai.com/dashboard/billing/usage" -HISTORY_DIR = Path("history") -TEMPLATES_DIR = "templates" - -# Error messages -standard_error_msg = "☹️发生了错误:" # standard prefix for error messages -error_retrieve_prompt = "请检查网络连接,或者API-Key是否有效。" # error while fetching the response -connection_timeout_prompt = "连接超时,无法获取对话。" # connection timed out -read_timeout_prompt = "读取超时,无法获取对话。" # read timed out -proxy_error_prompt = "代理错误,无法获取对话。" # proxy error -ssl_error_prompt = "SSL错误,无法获取对话。" # SSL error -no_apikey_msg = "API key长度不是51位,请检查是否输入正确。" # API key is not 51 characters long -no_input_msg = "请输入对话内容。" # no input provided - -timeout_streaming = 10 # timeout for streaming responses -timeout_all = 200 # timeout for non-streaming responses -enable_streaming_option = True # whether to show the checkbox for streaming answers in real time -HIDE_MY_KEY = False # set this to True to hide your API key in the UI -CONCURRENT_COUNT = 100 # number of users allowed to use the app concurrently - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -title = """

            川虎ChatGPT 🚀

            """ -description = """\ -
            - -由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536) 和 [明昭MZhao](https://space.bilibili.com/24807452)开发 - -访问川虎ChatGPT的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本 - -此App使用 `gpt-3.5-turbo` 大语言模型 -
            -""" - -footer = """\ -
            {versions}
            -""" - -summarize_prompt = "你是谁?我们刚才聊了什么?" # 总结对话时的 prompt - -MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-0301", - "gpt-4", - "gpt-4-0314", - "gpt-4-32k", - "gpt-4-32k-0314", -] # 可选的模型 - -MODEL_SOFT_TOKEN_LIMIT = { - "gpt-3.5-turbo": { - "streaming": 3500, - "all": 3500 - }, - "gpt-3.5-turbo-0301": { - "streaming": 3500, - "all": 3500 - }, - "gpt-4": { - "streaming": 7500, - "all": 7500 - }, - "gpt-4-0314": { - "streaming": 7500, - "all": 7500 - }, - "gpt-4-32k": { - "streaming": 31000, - "all": 31000 - }, - "gpt-4-32k-0314": { - "streaming": 31000, - "all": 31000 - } -} - -REPLY_LANGUAGES = [ - "简体中文", - "繁體中文", - "English", - "日本語", - "Español", - "Français", - "Deutsch", - "跟随问题语言(不稳定)" -] - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in {reply_language} -""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. -Make sure to cite results using [number] notation after the reference. -If the provided context information refer to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in {reply_language} -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. 
------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -Reply in {reply_language} -If the context isn't useful, return the original answer. -""" - -ALREADY_CONVERTED_MARK = "" - -small_and_beautiful_theme = gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#02C160", - c100="rgba(2, 193, 96, 0.2)", - c200="#02C160", - c300="rgba(2, 193, 96, 0.32)", - c400="rgba(2, 193, 96, 0.32)", - c500="rgba(2, 193, 96, 1.0)", - c600="rgba(2, 193, 96, 1.0)", - c700="rgba(2, 193, 96, 0.32)", - c800="rgba(2, 193, 96, 0.32)", - c900="#02C160", - c950="#02C160", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f9fafb", - c100="#f3f4f6", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - c900="#272727", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - button_primary_background_fill="#06AE56", - button_primary_background_fill_dark="#06AE56", - button_primary_background_fill_hover="#07C863", - button_primary_border_color="#06AE56", - button_primary_border_color_dark="#06AE56", - button_primary_text_color="#FFFFFF", - button_primary_text_color_dark="#FFFFFF", - button_secondary_background_fill="#F2F2F2", - button_secondary_background_fill_dark="#2B2B2B", - button_secondary_text_color="#393939", - button_secondary_text_color_dark="#FFFFFF", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - block_title_text_color="*primary_500", - block_title_background_fill="*primary_100", - input_background_fill="#F6F6F6", - ) diff --git a/spaces/xin/PatentSolver/App/bin/ParamProcessor.py b/spaces/xin/PatentSolver/App/bin/ParamProcessor.py deleted file mode 100644 index 
b36af0f8ab4d000053bc18ab3cf9716155c1edc8..0000000000000000000000000000000000000000 --- a/spaces/xin/PatentSolver/App/bin/ParamProcessor.py +++ /dev/null @@ -1,99 +0,0 @@ -# -*- coding: utf-8 -*- - -import json -import os -import re -import matplotlib.pyplot as plt -import numpy as np -from io import StringIO -from App4api.bin import constants -from collections import OrderedDict -from App4api.bin.InformationExtractor import InformationExtractor -from App4api.bin.ParameterExtractor import ParameterExtractor -from App4api.bin.TechnologyFinder import TechnologyFinder - -class ParamProcessor(object): - - def __init__(self, patents,input_folder, file_extension): - self.patents = patents - self.input_folder = input_folder - self.file_extension = file_extension - print("Processing started") - - def change_keys(self, dictionnary, number): - number = number+'-' - if type(dictionnary) is dict: - return dict([(number+str(k) , self.change_keys(v, number)) for k, v in dictionnary.items()]) - else: - return dictionnary - - def process_corpus(self): - - count_patent = 0 - patents = self.patents - input_folder = self.input_folder - project_folder = os.path.basename(os.path.normpath(input_folder)) - graph_folder = constants.GRAPH_FOLDER + project_folder+"/" - output_result = [] - parameters_graph = [] - reduced_content = [] - patent_corpus = [] - source_list = [] - parameters_list =[] - - - for patent_file in patents: - - read_patent = StringIO(patent_file) - patent = json.load(read_patent) - nNumber = patent['number'] - aAbstract = patent['abstract'] - cClaims = patent['claims'] - dDescription = patent['description'] - source = patent['source'] - - patent_content = aAbstract + cClaims + dDescription - patent_content = patent_content.splitlines() - - for line in patent_content: - get_parameters = ParameterExtractor(line) - parameters = get_parameters.extract_parameters() - if parameters: - parameters_list.extend( parameters) - - - parameters_list=list(set(parameters_list)) - - 
parameters = dict(enumerate(parameters_list, 1)) - - parameters = self.change_keys(parameters, nNumber.lower()) - - parameters_array = OrderedDict({ - "concept": { - "source": source, - "valeurs": parameters, - - } - - }) - pParameters= json.dumps(parameters_array, sort_keys=OrderedDict, indent=4, separators=(',', ': ')) - parameters_graph.append(pParameters) - count_patent +=1 - source_list.append(source) - patent_corpus.append(reduced_content) - - header = '{' - parameters_output = '"parameters": [%s]' % ','.join(parameters_graph) - footer = '}' - output_result.extend((header, parameters_output, footer)) - - output_result = "".join(output_result) - concepts_json = json.loads(output_result) - - - json_write_to_file = json.dumps(concepts_json, sort_keys=False, indent=4, separators=(',', ': ')) - - with open(graph_folder+"parameters-graph.json", 'w') as json_graph: - json_graph.write(json_write_to_file) - - return concepts_json \ No newline at end of file diff --git a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/setup.py b/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/setup.py deleted file mode 100644 index 32a4c9c9b72a15b1a4e1ad0cc83308fb9f465426..0000000000000000000000000000000000000000 --- a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/setup.py +++ /dev/null @@ -1,118 +0,0 @@ -#!/usr/bin/env python - -from setuptools import find_packages, setup - -import os -import subprocess -import time - -version_file = "realesrgan/version.py" - - -def readme(): - with open("README.md", encoding="utf-8") as f: - content = f.read() - return content - - -def get_git_hash(): - def _minimal_ext_cmd(cmd): - # construct minimal environment - env = {} - for k in ["SYSTEMROOT", "PATH", "HOME"]: - v = os.environ.get(k) - if v is not None: - env[k] = v - # LANGUAGE is used on win32 - env["LANGUAGE"] = "C" - env["LANG"] = "C" - env["LC_ALL"] = "C" - out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0] - 
return out - - try: - out = _minimal_ext_cmd(["git", "rev-parse", "HEAD"]) - sha = out.strip().decode("ascii") - except OSError: - sha = "unknown" - - return sha - - -def get_hash(): - if os.path.exists(".git"): - sha = get_git_hash()[:7] - else: - sha = "unknown" - - return sha - - -def write_version_py(): - content = """# GENERATED VERSION FILE -# TIME: {} -__version__ = '{}' -__gitsha__ = '{}' -version_info = ({}) -""" - sha = get_hash() - with open("VERSION", "r") as f: - SHORT_VERSION = f.read().strip() - VERSION_INFO = ", ".join( - [x if x.isdigit() else f'"{x}"' for x in SHORT_VERSION.split(".")] - ) - - version_file_str = content.format(time.asctime(), SHORT_VERSION, sha, VERSION_INFO) - with open(version_file, "w") as f: - f.write(version_file_str) - - -def get_version(): - with open(version_file, "r") as f: - exec(compile(f.read(), version_file, "exec")) - return locals()["__version__"] - - -def get_requirements(filename="requirements.txt"): - here = os.path.dirname(os.path.realpath(__file__)) - with open(os.path.join(here, filename), "r") as f: - requires = [line.replace("\n", "") for line in f.readlines()] - return requires - - -if __name__ == "__main__": - write_version_py() - setup( - name="realesrgan", - version=get_version(), - description="Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration", - long_description=readme(), - long_description_content_type="text/markdown", - author="Xintao Wang", - author_email="xintao.wang@outlook.com", - keywords="computer vision, pytorch, image restoration, super-resolution, esrgan, real-esrgan", - url="https://github.com/xinntao/Real-ESRGAN", - include_package_data=True, - packages=find_packages( - exclude=( - "options", - "datasets", - "experiments", - "results", - "tb_logger", - "wandb", - ) - ), - classifiers=[ - "Development Status :: 4 - Beta", - "License :: OSI Approved :: Apache Software License", - "Operating System :: OS Independent", - "Programming Language :: Python :: 3", - 
"Programming Language :: Python :: 3.7", - "Programming Language :: Python :: 3.8", - ], - license="BSD-3-Clause License", - setup_requires=["cython", "numpy"], - install_requires=get_requirements(), - zip_safe=False, - ) diff --git a/spaces/yangogo/bingo/src/components/providers.tsx deleted file mode 100644 index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000 --- a/spaces/yangogo/bingo/src/components/providers.tsx +++ /dev/null @@ -1,15 +0,0 @@ -'use client' - -import * as React from 'react' -import { ThemeProvider as NextThemesProvider } from 'next-themes' -import { ThemeProviderProps } from 'next-themes/dist/types' - -import { TooltipProvider } from '@/components/ui/tooltip' - -export function Providers({ children, ...props }: ThemeProviderProps) { - return ( - <NextThemesProvider {...props}> - <TooltipProvider>{children}</TooltipProvider> - </NextThemesProvider> - ) -} diff --git a/spaces/ybelkada/interfacegan_pp/utils/image_manip.py deleted file mode 100644 index c58671850aaf87e1666418ae3f03da81cf6c7caa..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/utils/image_manip.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import PIL - -def concat_images(generated_images, modified_generated_images): - """Concatenates each batch of images horizontally and returns the two strips as PIL images.""" - concatenated_array_generated_images = np.concatenate([np.array(image) for image in generated_images], axis=1) - concatenated_array_modified_generated_images = np.concatenate([np.array(image) for image in modified_generated_images], axis=1) - - return [PIL.Image.fromarray(concatenated_array_generated_images), PIL.Image.fromarray(concatenated_array_modified_generated_images)] - -def tensor_to_pil(input_object): - im_array = [] - if isinstance(input_object, dict): - images = input_object['image'] - else: - images = input_object - for _, image in enumerate(images): - im_array.append(PIL.Image.fromarray(image)) - return im_array \ No newline at 
end of file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/nougat/image_processing_nougat.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/nougat/image_processing_nougat.py deleted file mode 100644 index 882614059f9df6fbe0a08d6342cdcc1d3025d592..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/nougat/image_processing_nougat.py +++ /dev/null @@ -1,510 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Image processor class for Nougat.""" - -from typing import Dict, List, Optional, Union - -import numpy as np - -from ...image_processing_utils import BaseImageProcessor, BatchFeature, get_size_dict -from ...image_transforms import ( - get_resize_output_image_size, - pad, - resize, - to_channel_dimension_format, - to_pil_image, -) -from ...image_utils import ( - IMAGENET_DEFAULT_MEAN, - IMAGENET_DEFAULT_STD, - ChannelDimension, - ImageInput, - PILImageResampling, - get_image_size, - infer_channel_dimension_format, - is_scaled_image, - make_list_of_images, - to_numpy_array, - valid_images, -) -from ...utils import TensorType, logging -from ...utils.import_utils import is_cv2_available, is_vision_available - - -logger = logging.get_logger(__name__) - - -if is_cv2_available(): - pass - - -if is_vision_available(): - import PIL - - -class NougatImageProcessor(BaseImageProcessor): - r""" - Constructs a Nougat image processor. - - Args: - do_crop_margin (`bool`, *optional*, defaults to `True`): - Whether to crop the image margins. - do_resize (`bool`, *optional*, defaults to `True`): - Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by - `do_resize` in the `preprocess` method. - size (`Dict[str, int]` *optional*, defaults to `{"height": 896, "width": 672}`): - Size of the image after resizing. Can be overridden by `size` in the `preprocess` method. - resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`): - Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method. - do_thumbnail (`bool`, *optional*, defaults to `True`): - Whether to resize the image using thumbnail method. - do_align_long_axis (`bool`, *optional*, defaults to `False`): - Whether to align the long axis of the image with the long axis of `size` by rotating by 90 degrees. 
- do_pad (`bool`, *optional*, defaults to `True`): - Whether to pad the images to the largest image size in the batch. - do_rescale (`bool`, *optional*, defaults to `True`): - Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` - parameter in the `preprocess` method. - rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): - Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the - `preprocess` method. - do_normalize (`bool`, *optional*, defaults to `True`): - Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method. - image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`): - Mean to use if normalizing the image. This is a float or list of floats the length of the number of - channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. - image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`): - Image standard deviation. 
- """ - - model_input_names = ["pixel_values"] - - def __init__( - self, - do_crop_margin: bool = True, - do_resize: bool = True, - size: Dict[str, int] = None, - resample: PILImageResampling = PILImageResampling.BILINEAR, - do_thumbnail: bool = True, - do_align_long_axis: bool = False, - do_pad: bool = True, - do_rescale: bool = True, - rescale_factor: Union[int, float] = 1 / 255, - do_normalize: bool = True, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - **kwargs, - ) -> None: - super().__init__(**kwargs) - - size = size if size is not None else {"height": 896, "width": 672} - size = get_size_dict(size) - - self.do_crop_margin = do_crop_margin - self.do_resize = do_resize - self.size = size - self.resample = resample - self.do_thumbnail = do_thumbnail - self.do_align_long_axis = do_align_long_axis - self.do_pad = do_pad - self.do_rescale = do_rescale - self.rescale_factor = rescale_factor - self.do_normalize = do_normalize - self.image_mean = image_mean if image_mean is not None else IMAGENET_DEFAULT_MEAN - self.image_std = image_std if image_std is not None else IMAGENET_DEFAULT_STD - - def python_find_non_zero(self, image: np.array): - """This is a reimplementation of a findNonZero function equivalent to cv2.""" - non_zero_indices = np.column_stack(np.nonzero(image)) - idxvec = non_zero_indices[:, [1, 0]] - idxvec = idxvec.reshape(-1, 1, 2) - return idxvec - - def python_bounding_rect(self, coordinates): - """This is a reimplementation of a BoundingRect function equivalent to cv2.""" - min_values = np.min(coordinates, axis=(0, 1)).astype(int) - max_values = np.max(coordinates, axis=(0, 1)).astype(int) - x_min, y_min = min_values[0], min_values[1] - width = max_values[0] - x_min + 1 - height = max_values[1] - y_min + 1 - return x_min, y_min, width, height - - def crop_margin( - self, - image: np.array, - gray_threshold: int = 200, - data_format: Optional[ChannelDimension] = None, - 
input_data_format: Optional[Union[str, ChannelDimension]] = None, - ) -> np.array: - """ - Crops the margin of the image. Gray pixels are considered margin (i.e., pixels with a value below the - threshold). - - Args: - image (`np.array`): - The image to be cropped. - gray_threshold (`int`, *optional*, defaults to `200`) - Value below which pixels are considered to be gray. - data_format (`ChannelDimension`, *optional*): - The channel dimension format of the output image. If unset, will use the inferred format from the - input. - input_data_format (`ChannelDimension`, *optional*): - The channel dimension format of the input image. If unset, will use the inferred format from the input. - """ - if input_data_format is None: - input_data_format = infer_channel_dimension_format(image) - - image = to_pil_image(image, input_data_format=input_data_format) - data = np.array(image.convert("L")).astype(np.uint8) - max_val = data.max() - min_val = data.min() - if max_val == min_val: - image = np.array(image) - image = ( - to_channel_dimension_format(image, data_format, input_data_format) - if data_format is not None - else image - ) - return image - data = (data - min_val) / (max_val - min_val) * 255 - gray = data < gray_threshold - coords = self.python_find_non_zero(gray) - x_min, y_min, width, height = self.python_bounding_rect(coords) - image = image.crop((x_min, y_min, x_min + width, y_min + height)) - image = np.array(image).astype(np.uint8) - image = to_channel_dimension_format(image, input_data_format, ChannelDimension.LAST) - - image = ( - to_channel_dimension_format(image, data_format, input_data_format) if data_format is not None else image - ) - - return image - - # Copied from transformers.models.donut.image_processing_donut.DonutImageProcessor.align_long_axis - def align_long_axis( - self, - image: np.ndarray, - size: Dict[str, int], - data_format: Optional[Union[str, ChannelDimension]] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - 
) -> np.ndarray: - """ - Align the long axis of the image to the longest axis of the specified size. - - Args: - image (`np.ndarray`): - The image to be aligned. - size (`Dict[str, int]`): - The size `{"height": h, "width": w}` to align the long axis to. - data_format (`str` or `ChannelDimension`, *optional*): - The data format of the output image. If unset, the same format as the input image is used. - input_data_format (`ChannelDimension` or `str`, *optional*): - The channel dimension format of the input image. If not provided, it will be inferred. - - Returns: - `np.ndarray`: The aligned image. - """ - input_height, input_width = get_image_size(image, channel_dim=input_data_format) - output_height, output_width = size["height"], size["width"] - - if (output_width < output_height and input_width > input_height) or ( - output_width > output_height and input_width < input_height - ): - image = np.rot90(image, 3) - - if data_format is not None: - image = to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) - - return image - - def pad_image( - self, - image: np.ndarray, - size: Dict[str, int], - data_format: Optional[Union[str, ChannelDimension]] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - ) -> np.ndarray: - """ - Pad the image to the specified size at the top, bottom, left and right. - - Args: - image (`np.ndarray`): - The image to be padded. - size (`Dict[str, int]`): - The size `{"height": h, "width": w}` to pad the image to. - data_format (`str` or `ChannelDimension`, *optional*): - The data format of the output image. If unset, the same format as the input image is used. - input_data_format (`ChannelDimension` or `str`, *optional*): - The channel dimension format of the input image. If not provided, it will be inferred. 
- """ - output_height, output_width = size["height"], size["width"] - input_height, input_width = get_image_size(image, channel_dim=input_data_format) - - delta_width = output_width - input_width - delta_height = output_height - input_height - - pad_top = delta_height // 2 - pad_left = delta_width // 2 - - pad_bottom = delta_height - pad_top - pad_right = delta_width - pad_left - - padding = ((pad_top, pad_bottom), (pad_left, pad_right)) - return pad(image, padding, data_format=data_format, input_data_format=input_data_format) - - # Copied from transformers.models.donut.image_processing_donut.DonutImageProcessor.thumbnail - def thumbnail( - self, - image: np.ndarray, - size: Dict[str, int], - resample: PILImageResampling = PILImageResampling.BICUBIC, - data_format: Optional[Union[str, ChannelDimension]] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - **kwargs, - ) -> np.ndarray: - """ - Resize the image to make a thumbnail. The image is resized so that no dimension is larger than any - corresponding dimension of the specified size. - - Args: - image (`np.ndarray`): - The image to be resized. - size (`Dict[str, int]`): - The size `{"height": h, "width": w}` to resize the image to. - resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`): - The resampling filter to use. - data_format (`Optional[Union[str, ChannelDimension]]`, *optional*): - The data format of the output image. If unset, the same format as the input image is used. - input_data_format (`ChannelDimension` or `str`, *optional*): - The channel dimension format of the input image. If not provided, it will be inferred. - """ - input_height, input_width = get_image_size(image, channel_dim=input_data_format) - output_height, output_width = size["height"], size["width"] - - # We always resize to the smallest of either the input or output size. 
-        height = min(input_height, output_height)
-        width = min(input_width, output_width)
-
-        if height == input_height and width == input_width:
-            return image
-
-        if input_height > input_width:
-            width = int(input_width * height / input_height)
-        elif input_width > input_height:
-            height = int(input_height * width / input_width)
-
-        return resize(
-            image,
-            size=(height, width),
-            resample=resample,
-            reducing_gap=2.0,
-            data_format=data_format,
-            input_data_format=input_data_format,
-            **kwargs,
-        )
-
-    # Copied from transformers.models.donut.image_processing_donut.DonutImageProcessor.resize
-    def resize(
-        self,
-        image: np.ndarray,
-        size: Dict[str, int],
-        resample: PILImageResampling = PILImageResampling.BICUBIC,
-        data_format: Optional[Union[str, ChannelDimension]] = None,
-        input_data_format: Optional[Union[str, ChannelDimension]] = None,
-        **kwargs,
-    ) -> np.ndarray:
-        """
-        Resizes `image` to `(height, width)` specified by `size` using the PIL library.
-
-        Args:
-            image (`np.ndarray`):
-                Image to resize.
-            size (`Dict[str, int]`):
-                Size of the output image.
-            resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`):
-                Resampling filter to use when resizing the image.
-            data_format (`str` or `ChannelDimension`, *optional*):
-                The channel dimension format of the image. If not provided, it will be the same as the input image.
-            input_data_format (`ChannelDimension` or `str`, *optional*):
-                The channel dimension format of the input image. If not provided, it will be inferred.
- """ - size = get_size_dict(size) - shortest_edge = min(size["height"], size["width"]) - output_size = get_resize_output_image_size( - image, size=shortest_edge, default_to_square=False, input_data_format=input_data_format - ) - resized_image = resize( - image, - size=output_size, - resample=resample, - data_format=data_format, - input_data_format=input_data_format, - **kwargs, - ) - return resized_image - - def preprocess( - self, - images: ImageInput, - do_crop_margin: bool = None, - do_resize: bool = None, - size: Dict[str, int] = None, - resample: PILImageResampling = None, - do_thumbnail: bool = None, - do_align_long_axis: bool = None, - do_pad: bool = None, - do_rescale: bool = None, - rescale_factor: Union[int, float] = None, - do_normalize: bool = None, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - return_tensors: Optional[Union[str, TensorType]] = None, - data_format: Optional[ChannelDimension] = ChannelDimension.FIRST, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - **kwargs, - ) -> PIL.Image.Image: - """ - Preprocess an image or batch of images. - - Args: - images (`ImageInput`): - Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. - do_crop_margin (`bool`, *optional*, defaults to `self.do_crop_margin`): - Whether to crop the image margins. - do_resize (`bool`, *optional*, defaults to `self.do_resize`): - Whether to resize the image. - size (`Dict[str, int]`, *optional*, defaults to `self.size`): - Size of the image after resizing. Shortest edge of the image is resized to min(size["height"], - size["width"]) with the longest edge resized to keep the input aspect ratio. - resample (`int`, *optional*, defaults to `self.resample`): - Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only - has an effect if `do_resize` is set to `True`. 
-            do_thumbnail (`bool`, *optional*, defaults to `self.do_thumbnail`):
-                Whether to resize the image using the thumbnail method.
-            do_align_long_axis (`bool`, *optional*, defaults to `self.do_align_long_axis`):
-                Whether to align the long axis of the image with the long axis of `size` by rotating by 90 degrees.
-            do_pad (`bool`, *optional*, defaults to `self.do_pad`):
-                Whether to pad the image to the specified `size` at the top, bottom, left and right.
-            do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
-                Whether to rescale the image by the specified scale `rescale_factor`.
-            rescale_factor (`int` or `float`, *optional*, defaults to `self.rescale_factor`):
-                Scale factor to use if rescaling the image.
-            do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
-                Whether to normalize the image.
-            image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
-                Image mean to use for normalization.
-            image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
-                Image standard deviation to use for normalization.
-            return_tensors (`str` or `TensorType`, *optional*):
-                The type of tensors to return. Can be one of:
-                - Unset: Return a list of `np.ndarray`.
-                - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
-                - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
-                - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
-                - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
-            data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`):
-                The channel dimension format for the output image. Can be one of:
-                - `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
-                - `ChannelDimension.LAST`: image in (height, width, num_channels) format.
-                - Unset: defaults to the channel dimension format of the input image.
- input_data_format (`ChannelDimension` or `str`, *optional*): - The channel dimension format for the input image. If unset, the channel dimension format is inferred - from the input image. Can be one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - - `"none"` or `ChannelDimension.NONE`: image in (height, width) format. - """ - do_crop_margin = do_crop_margin if do_crop_margin is not None else self.do_crop_margin - do_resize = do_resize if do_resize is not None else self.do_resize - size = size if size is not None else self.size - resample = resample if resample is not None else self.resample - do_thumbnail = do_thumbnail if do_thumbnail is not None else self.do_thumbnail - do_align_long_axis = do_align_long_axis if do_align_long_axis is not None else self.do_align_long_axis - do_pad = do_pad if do_pad is not None else self.do_pad - do_rescale = do_rescale if do_rescale is not None else self.do_rescale - rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor - do_normalize = do_normalize if do_normalize is not None else self.do_normalize - image_mean = image_mean if image_mean is not None else self.image_mean - image_std = image_std if image_std is not None else self.image_std - - images = make_list_of_images(images) - - if not valid_images(images): - raise ValueError( - "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, " - "torch.Tensor, tf.Tensor or jax.ndarray." 
- ) - - if do_resize and size is None: - raise ValueError("Size must be specified if do_resize is True.") - - if do_pad and size is None: - raise ValueError("Size must be specified if do_pad is True.") - - if do_rescale and rescale_factor is None: - raise ValueError("Rescale factor must be specified if do_rescale is True.") - - if do_normalize and (image_mean is None or image_std is None): - raise ValueError("Image mean and std must be specified if do_normalize is True.") - - # All transformations expect numpy arrays. - images = [to_numpy_array(image) for image in images] - - if is_scaled_image(images[0]) and do_rescale: - logger.warning_once( - "It looks like you are trying to rescale already rescaled images. If the input" - " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again." - ) - - if input_data_format is None: - # We assume that all images have the same channel dimension format. - input_data_format = infer_channel_dimension_format(images[0]) - - if do_crop_margin: - images = [self.crop_margin(image, input_data_format=input_data_format) for image in images] - - if do_align_long_axis: - images = [self.align_long_axis(image, size=size, input_data_format=input_data_format) for image in images] - - if do_resize: - images = [ - self.resize(image=image, size=size, resample=resample, input_data_format=input_data_format) - for image in images - ] - - if do_thumbnail: - images = [self.thumbnail(image=image, size=size, input_data_format=input_data_format) for image in images] - - if do_pad: - images = [self.pad_image(image=image, size=size, input_data_format=input_data_format) for image in images] - - if do_rescale: - images = [ - self.rescale(image=image, scale=rescale_factor, input_data_format=input_data_format) - for image in images - ] - - if do_normalize: - images = [ - self.normalize(image=image, mean=image_mean, std=image_std, input_data_format=input_data_format) - for image in images - ] - - images = [ - 
to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) for image in images
-        ]
-
-        data = {"pixel_values": images}
-        return BatchFeature(data=data, tensor_type=return_tensors)
diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vdecoder/nsf_hifigan/nvSTFT.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vdecoder/nsf_hifigan/nvSTFT.py
deleted file mode 100644
index 62bd5a008f81929054f036c81955d5d73377f772..0000000000000000000000000000000000000000
--- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vdecoder/nsf_hifigan/nvSTFT.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import math
-import os
-os.environ["LRU_CACHE_CAPACITY"] = "3"
-import random
-import torch
-import torch.utils.data
-import numpy as np
-import librosa
-from librosa.util import normalize
-from librosa.filters import mel as librosa_mel_fn
-from scipy.io.wavfile import read
-import soundfile as sf
-import torch.nn.functional as F
-
-def load_wav_to_torch(full_path, target_sr=None, return_empty_on_exception=False):
-    sampling_rate = None
-    try:
-        data, sampling_rate = sf.read(full_path, always_2d=True)  # read as a 2D array of shape (frames, channels)
- except Exception as ex: - print(f"'{full_path}' failed to load.\nException:") - print(ex) - if return_empty_on_exception: - return [], sampling_rate or target_sr or 48000 - else: - raise Exception(ex) - - if len(data.shape) > 1: - data = data[:, 0] - assert len(data) > 2# check duration of audio file is > 2 samples (because otherwise the slice operation was on the wrong dimension) - - if np.issubdtype(data.dtype, np.integer): # if audio data is type int - max_mag = -np.iinfo(data.dtype).min # maximum magnitude = min possible value of intXX - else: # if audio data is type fp32 - max_mag = max(np.amax(data), -np.amin(data)) - max_mag = (2**31)+1 if max_mag > (2**15) else ((2**15)+1 if max_mag > 1.01 else 1.0) # data should be either 16-bit INT, 32-bit INT or [-1 to 1] float32 - - data = torch.FloatTensor(data.astype(np.float32))/max_mag - - if (torch.isinf(data) | torch.isnan(data)).any() and return_empty_on_exception:# resample will crash with inf/NaN inputs. return_empty_on_exception will return empty arr instead of except - return [], sampling_rate or target_sr or 48000 - if target_sr is not None and sampling_rate != target_sr: - data = torch.from_numpy(librosa.core.resample(data.numpy(), orig_sr=sampling_rate, target_sr=target_sr)) - sampling_rate = target_sr - - return data, sampling_rate - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - -class STFT(): - def __init__(self, sr=22050, n_mels=80, n_fft=1024, win_size=1024, hop_length=256, fmin=20, fmax=11025, clip_val=1e-5): - self.target_sr = sr - - self.n_mels = n_mels - self.n_fft = n_fft - self.win_size = win_size - self.hop_length = hop_length - self.fmin = fmin - self.fmax = fmax - 
self.clip_val = clip_val - self.mel_basis = {} - self.hann_window = {} - - def get_mel(self, y, keyshift=0, speed=1, center=False): - sampling_rate = self.target_sr - n_mels = self.n_mels - n_fft = self.n_fft - win_size = self.win_size - hop_length = self.hop_length - fmin = self.fmin - fmax = self.fmax - clip_val = self.clip_val - - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(n_fft * factor)) - win_size_new = int(np.round(win_size * factor)) - hop_length_new = int(np.round(hop_length * speed)) - - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - mel_basis_key = str(fmax)+'_'+str(y.device) - if mel_basis_key not in self.mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax) - self.mel_basis[mel_basis_key] = torch.from_numpy(mel).float().to(y.device) - - keyshift_key = str(keyshift)+'_'+str(y.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_size_new).to(y.device) - - pad_left = (win_size_new - hop_length_new) //2 - pad_right = max((win_size_new- hop_length_new + 1) //2, win_size_new - y.size(-1) - pad_left) - if pad_right < y.size(-1): - mode = 'reflect' - else: - mode = 'constant' - y = torch.nn.functional.pad(y.unsqueeze(1), (pad_left, pad_right), mode = mode) - y = y.squeeze(1) - - spec = torch.stft(y, n_fft_new, hop_length=hop_length_new, win_length=win_size_new, window=self.hann_window[keyshift_key], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - # print(111,spec) - spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9)) - if keyshift != 0: - size = n_fft // 2 + 1 - resize = spec.size(1) - if resize < size: - spec = F.pad(spec, (0, 0, 0, size-resize)) - spec = spec[:, :size, :] * win_size / win_size_new - - # print(222,spec) - spec = torch.matmul(self.mel_basis[mel_basis_key], spec) - # print(333,spec) - spec = 
dynamic_range_compression_torch(spec, clip_val=clip_val) - # print(444,spec) - return spec - - def __call__(self, audiopath): - audio, sr = load_wav_to_torch(audiopath, target_sr=self.target_sr) - spect = self.get_mel(audio.unsqueeze(0)).squeeze(0) - return spect - -stft = STFT() diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/cascade_rcnn.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/cascade_rcnn.py deleted file mode 100644 index a0ca70fe23a1d406ee9bed6204a987d7e0708b91..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/cascade_rcnn.py +++ /dev/null @@ -1,299 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from typing import List -import torch -from torch import nn -from torch.autograd.function import Function - -from detectron2.config import configurable -from detectron2.layers import ShapeSpec -from detectron2.structures import Boxes, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage - -from ..box_regression import Box2BoxTransform -from ..matcher import Matcher -from ..poolers import ROIPooler -from .box_head import build_box_head -from .fast_rcnn import FastRCNNOutputLayers, fast_rcnn_inference -from .roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads - - -class _ScaleGradient(Function): - @staticmethod - def forward(ctx, input, scale): - ctx.scale = scale - return input - - @staticmethod - def backward(ctx, grad_output): - return grad_output * ctx.scale, None - - -@ROI_HEADS_REGISTRY.register() -class CascadeROIHeads(StandardROIHeads): - """ - The ROI heads that implement :paper:`Cascade R-CNN`. 
- """ - - @configurable - def __init__( - self, - *, - box_in_features: List[str], - box_pooler: ROIPooler, - box_heads: List[nn.Module], - box_predictors: List[nn.Module], - proposal_matchers: List[Matcher], - **kwargs, - ): - """ - NOTE: this interface is experimental. - - Args: - box_pooler (ROIPooler): pooler that extracts region features from given boxes - box_heads (list[nn.Module]): box head for each cascade stage - box_predictors (list[nn.Module]): box predictor for each cascade stage - proposal_matchers (list[Matcher]): matcher with different IoU thresholds to - match boxes with ground truth for each stage. The first matcher matches - RPN proposals with ground truth, the other matchers use boxes predicted - by the previous stage as proposals and match them with ground truth. - """ - assert "proposal_matcher" not in kwargs, ( - "CascadeROIHeads takes 'proposal_matchers=' for each stage instead " - "of one 'proposal_matcher='." - ) - # The first matcher matches RPN proposals with ground truth, done in the base class - kwargs["proposal_matcher"] = proposal_matchers[0] - num_stages = self.num_cascade_stages = len(box_heads) - box_heads = nn.ModuleList(box_heads) - box_predictors = nn.ModuleList(box_predictors) - assert len(box_predictors) == num_stages, f"{len(box_predictors)} != {num_stages}!" - assert len(proposal_matchers) == num_stages, f"{len(proposal_matchers)} != {num_stages}!" 
- super().__init__( - box_in_features=box_in_features, - box_pooler=box_pooler, - box_head=box_heads, - box_predictor=box_predictors, - **kwargs, - ) - self.proposal_matchers = proposal_matchers - - @classmethod - def from_config(cls, cfg, input_shape): - ret = super().from_config(cfg, input_shape) - ret.pop("proposal_matcher") - return ret - - @classmethod - def _init_box_head(cls, cfg, input_shape): - # fmt: off - in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES - pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION - pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features) - sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO - pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE - cascade_bbox_reg_weights = cfg.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS - cascade_ious = cfg.MODEL.ROI_BOX_CASCADE_HEAD.IOUS - assert len(cascade_bbox_reg_weights) == len(cascade_ious) - assert cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG, \ - "CascadeROIHeads only support class-agnostic regression now!" 
-        assert cascade_ious[0] == cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS[0]
-        # fmt: on
-
-        in_channels = [input_shape[f].channels for f in in_features]
-        # Check all channel counts are equal
-        assert len(set(in_channels)) == 1, in_channels
-        in_channels = in_channels[0]
-
-        box_pooler = ROIPooler(
-            output_size=pooler_resolution,
-            scales=pooler_scales,
-            sampling_ratio=sampling_ratio,
-            pooler_type=pooler_type,
-        )
-        pooled_shape = ShapeSpec(
-            channels=in_channels, width=pooler_resolution, height=pooler_resolution
-        )
-
-        box_heads, box_predictors, proposal_matchers = [], [], []
-        for match_iou, bbox_reg_weights in zip(cascade_ious, cascade_bbox_reg_weights):
-            box_head = build_box_head(cfg, pooled_shape)
-            box_heads.append(box_head)
-            box_predictors.append(
-                FastRCNNOutputLayers(
-                    cfg,
-                    box_head.output_shape,
-                    box2box_transform=Box2BoxTransform(weights=bbox_reg_weights),
-                )
-            )
-            proposal_matchers.append(Matcher([match_iou], [0, 1], allow_low_quality_matches=False))
-        return {
-            "box_in_features": in_features,
-            "box_pooler": box_pooler,
-            "box_heads": box_heads,
-            "box_predictors": box_predictors,
-            "proposal_matchers": proposal_matchers,
-        }
-
-    def forward(self, images, features, proposals, targets=None):
-        del images
-        if self.training:
-            proposals = self.label_and_sample_proposals(proposals, targets)
-
-        if self.training:
-            # Need targets to box head
-            losses = self._forward_box(features, proposals, targets)
-            losses.update(self._forward_mask(features, proposals))
-            losses.update(self._forward_keypoint(features, proposals))
-            return proposals, losses
-        else:
-            pred_instances = self._forward_box(features, proposals)
-            pred_instances = self.forward_with_given_boxes(features, pred_instances)
-            return pred_instances, {}
-
-    def _forward_box(self, features, proposals, targets=None):
-        """
-        Args:
-            features, targets: the same as in :meth:`ROIHeads.forward`.
-            proposals (list[Instances]): the per-image object proposals with
-                their matching ground truth.
- Each has fields "proposal_boxes", and "objectness_logits", - "gt_classes", "gt_boxes". - """ - features = [features[f] for f in self.box_in_features] - head_outputs = [] # (predictor, predictions, proposals) - prev_pred_boxes = None - image_sizes = [x.image_size for x in proposals] - for k in range(self.num_cascade_stages): - if k > 0: - # The output boxes of the previous stage are used to create the input - # proposals of the next stage. - proposals = self._create_proposals_from_boxes(prev_pred_boxes, image_sizes) - if self.training: - proposals = self._match_and_label_boxes(proposals, k, targets) - predictions = self._run_stage(features, proposals, k) - prev_pred_boxes = self.box_predictor[k].predict_boxes(predictions, proposals) - head_outputs.append((self.box_predictor[k], predictions, proposals)) - - if self.training: - losses = {} - storage = get_event_storage() - for stage, (predictor, predictions, proposals) in enumerate(head_outputs): - with storage.name_scope("stage{}".format(stage)): - stage_losses = predictor.losses(predictions, proposals) - losses.update({k + "_stage{}".format(stage): v for k, v in stage_losses.items()}) - return losses - else: - # Each is a list[Tensor] of length #image. 
Each tensor is Ri x (K+1) - scores_per_stage = [h[0].predict_probs(h[1], h[2]) for h in head_outputs] - - # Average the scores across heads - scores = [ - sum(list(scores_per_image)) * (1.0 / self.num_cascade_stages) - for scores_per_image in zip(*scores_per_stage) - ] - # Use the boxes of the last head - predictor, predictions, proposals = head_outputs[-1] - boxes = predictor.predict_boxes(predictions, proposals) - pred_instances, _ = fast_rcnn_inference( - boxes, - scores, - image_sizes, - predictor.test_score_thresh, - predictor.test_nms_thresh, - predictor.test_topk_per_image, - ) - return pred_instances - - @torch.no_grad() - def _match_and_label_boxes(self, proposals, stage, targets): - """ - Match proposals with groundtruth using the matcher at the given stage. - Label the proposals as foreground or background based on the match. - - Args: - proposals (list[Instances]): One Instances for each image, with - the field "proposal_boxes". - stage (int): the current stage - targets (list[Instances]): the ground truth instances - - Returns: - list[Instances]: the same proposals, but with fields "gt_classes" and "gt_boxes" - """ - num_fg_samples, num_bg_samples = [], [] - for proposals_per_image, targets_per_image in zip(proposals, targets): - match_quality_matrix = pairwise_iou( - targets_per_image.gt_boxes, proposals_per_image.proposal_boxes - ) - # proposal_labels are 0 or 1 - matched_idxs, proposal_labels = self.proposal_matchers[stage](match_quality_matrix) - if len(targets_per_image) > 0: - gt_classes = targets_per_image.gt_classes[matched_idxs] - # Label unmatched proposals (0 label from matcher) as background (label=num_classes) - gt_classes[proposal_labels == 0] = self.num_classes - gt_boxes = targets_per_image.gt_boxes[matched_idxs] - else: - gt_classes = torch.zeros_like(matched_idxs) + self.num_classes - gt_boxes = Boxes( - targets_per_image.gt_boxes.tensor.new_zeros((len(proposals_per_image), 4)) - ) - proposals_per_image.gt_classes = gt_classes - 
proposals_per_image.gt_boxes = gt_boxes - - num_fg_samples.append((proposal_labels == 1).sum().item()) - num_bg_samples.append(proposal_labels.numel() - num_fg_samples[-1]) - - # Log the number of fg/bg samples in each stage - storage = get_event_storage() - storage.put_scalar( - "stage{}/roi_head/num_fg_samples".format(stage), - sum(num_fg_samples) / len(num_fg_samples), - ) - storage.put_scalar( - "stage{}/roi_head/num_bg_samples".format(stage), - sum(num_bg_samples) / len(num_bg_samples), - ) - return proposals - - def _run_stage(self, features, proposals, stage): - """ - Args: - features (list[Tensor]): #lvl input features to ROIHeads - proposals (list[Instances]): #image Instances, with the field "proposal_boxes" - stage (int): the current stage - - Returns: - Same output as `FastRCNNOutputLayers.forward()`. - """ - box_features = self.box_pooler(features, [x.proposal_boxes for x in proposals]) - # The original implementation averages the losses among heads, - # but scale up the parameter gradients of the heads. - # This is equivalent to adding the losses among heads, - # but scale down the gradients on features. - if self.training: - box_features = _ScaleGradient.apply(box_features, 1.0 / self.num_cascade_stages) - box_features = self.box_head[stage](box_features) - return self.box_predictor[stage](box_features) - - def _create_proposals_from_boxes(self, boxes, image_sizes): - """ - Args: - boxes (list[Tensor]): per-image predicted boxes, each of shape Ri x 4 - image_sizes (list[tuple]): list of image shapes in (h, w) - - Returns: - list[Instances]: per-image proposals with the given boxes. 
- """ - # Just like RPN, the proposals should not have gradients - boxes = [Boxes(b.detach()) for b in boxes] - proposals = [] - for boxes_per_image, image_size in zip(boxes, image_sizes): - boxes_per_image.clip(image_size) - if self.training: - # do not filter empty boxes at inference time, - # because the scores from each stage need to be aligned and added later - boxes_per_image = boxes_per_image[boxes_per_image.nonempty()] - prop = Instances(image_size) - prop.proposal_boxes = boxes_per_image - proposals.append(prop) - return proposals diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/overscroll-behavior.js b/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/overscroll-behavior.js deleted file mode 100644 index 0a09f1ece7003b472c4f9ee8e2380dbe3f788207..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/overscroll-behavior.js +++ /dev/null @@ -1,33 +0,0 @@ -let Declaration = require('../declaration') - -class OverscrollBehavior extends Declaration { - /** - * Change property name for IE - */ - prefixed(prop, prefix) { - return prefix + 'scroll-chaining' - } - - /** - * Return property name by spec - */ - normalize() { - return 'overscroll-behavior' - } - - /** - * Change value for IE - */ - set(decl, prefix) { - if (decl.value === 'auto') { - decl.value = 'chained' - } else if (decl.value === 'none' || decl.value === 'contain') { - decl.value = 'none' - } - return super.set(decl, prefix) - } -} - -OverscrollBehavior.names = ['overscroll-behavior', 'scroll-chaining'] - -module.exports = OverscrollBehavior diff --git a/spaces/zhang-wei-jian/docker/node_modules/tsscmp/test/unit/index.js b/spaces/zhang-wei-jian/docker/node_modules/tsscmp/test/unit/index.js deleted file mode 100644 index 03354234a34220aac40dba7912bcd6d5bcc2d2b0..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/tsscmp/test/unit/index.js 
+++ /dev/null @@ -1,69 +0,0 @@ -'use strict'; - -var assert = require('assert'); -var timeSafeCompare = require('../../lib/index'); - -process.on('error', function (e) { - console.log('caught: ' + e); -}); - -function testEqual(a, b) { - assert(timeSafeCompare(a, b)); - - // lets also do a parity check with the strict equal to operator - assert(a === b); -} - -function testNotEqual(a, b) { - assert(!timeSafeCompare(a, b)); - - // lets also do a parity check with the strict not equal to operator - assert(a !== b); -} - -// note: lets also make sure tsscmp can be inline replaced for any types - -// just incase if anyone is interested - -// positive tests -testEqual('127e6fbfe24a750e72930c220a8e138275656b8e5d8f48a98c3c92df2caba935', - '127e6fbfe24a750e72930c220a8e138275656b8e5d8f48a98c3c92df2caba935', - 'test '); -testEqual('a', 'a'); -testEqual('', ''); -testEqual(undefined, undefined); -testEqual(true, true); -testEqual(false, false); -(function () { - var a = { a: 1 }; - testEqual(a, a); -})(); -(function () { - function f1() { return 1; }; - testEqual(f1, f1); -})(); - -// negative tests -testNotEqual(''); -testNotEqual('a', 'b'); -testNotEqual('a', 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'); -testNotEqual('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', 'a'); -testNotEqual('alpha', 'beta'); -testNotEqual(false, true); -testNotEqual(false, undefined); -testNotEqual(function () { }, function () { }); -testNotEqual({}, {}); -testNotEqual({ a: 1 }, { a: 1 }); -testNotEqual({ a: 1 }, { a: 2 }); -testNotEqual([1, 2], [1, 2]); -testNotEqual([1, 2], [1, 2, 3]); -(function () { - var a = { p: 1 }; - var b = { p: 1 }; - testNotEqual(a, b); -})(); -(function () { - function f1() { return 1; }; - function f2() { return 1; }; - testNotEqual(f1, f2); -})(); -console.log('Success: all tests complete.'); diff --git a/spaces/zhang-wei-jian/docker/node_modules/type-is/HISTORY.md b/spaces/zhang-wei-jian/docker/node_modules/type-is/HISTORY.md deleted file 
mode 100644 index 8de21f7ae6f8de94d8c8b00fbf8c3017247077ff..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/type-is/HISTORY.md +++ /dev/null @@ -1,259 +0,0 @@ -1.6.18 / 2019-04-26 -=================== - - * Fix regression passing request object to `typeis.is` - -1.6.17 / 2019-04-25 -=================== - - * deps: mime-types@~2.1.24 - - Add Apple file extensions from IANA - - Add extension `.csl` to `application/vnd.citationstyles.style+xml` - - Add extension `.es` to `application/ecmascript` - - Add extension `.nq` to `application/n-quads` - - Add extension `.nt` to `application/n-triples` - - Add extension `.owl` to `application/rdf+xml` - - Add extensions `.siv` and `.sieve` to `application/sieve` - - Add extensions from IANA for `image/*` types - - Add extensions from IANA for `model/*` types - - Add extensions to HEIC image types - - Add new mime types - - Add `text/mdx` with extension `.mdx` - * perf: prevent internal `throw` on invalid type - -1.6.16 / 2018-02-16 -=================== - - * deps: mime-types@~2.1.18 - - Add `application/raml+yaml` with extension `.raml` - - Add `application/wasm` with extension `.wasm` - - Add `text/shex` with extension `.shex` - - Add extensions for JPEG-2000 images - - Add extensions from IANA for `message/*` types - - Add extension `.mjs` to `application/javascript` - - Add extension `.wadl` to `application/vnd.sun.wadl+xml` - - Add extension `.gz` to `application/gzip` - - Add glTF types and extensions - - Add new mime types - - Update extensions `.md` and `.markdown` to be `text/markdown` - - Update font MIME types - - Update `text/hjson` to registered `application/hjson` - -1.6.15 / 2017-03-31 -=================== - - * deps: mime-types@~2.1.15 - - Add new mime types - -1.6.14 / 2016-11-18 -=================== - - * deps: mime-types@~2.1.13 - - Add new mime types - -1.6.13 / 2016-05-18 -=================== - - * deps: mime-types@~2.1.11 - - Add new mime types - -1.6.12 / 
2016-02-28 -=================== - - * deps: mime-types@~2.1.10 - - Add new mime types - - Fix extension of `application/dash+xml` - - Update primary extension for `audio/mp4` - -1.6.11 / 2016-01-29 -=================== - - * deps: mime-types@~2.1.9 - - Add new mime types - -1.6.10 / 2015-12-01 -=================== - - * deps: mime-types@~2.1.8 - - Add new mime types - -1.6.9 / 2015-09-27 -================== - - * deps: mime-types@~2.1.7 - - Add new mime types - -1.6.8 / 2015-09-04 -================== - - * deps: mime-types@~2.1.6 - - Add new mime types - -1.6.7 / 2015-08-20 -================== - - * Fix type error when given invalid type to match against - * deps: mime-types@~2.1.5 - - Add new mime types - -1.6.6 / 2015-07-31 -================== - - * deps: mime-types@~2.1.4 - - Add new mime types - -1.6.5 / 2015-07-16 -================== - - * deps: mime-types@~2.1.3 - - Add new mime types - -1.6.4 / 2015-07-01 -================== - - * deps: mime-types@~2.1.2 - - Add new mime types - * perf: enable strict mode - * perf: remove argument reassignment - -1.6.3 / 2015-06-08 -================== - - * deps: mime-types@~2.1.1 - - Add new mime types - * perf: reduce try block size - * perf: remove bitwise operations - -1.6.2 / 2015-05-10 -================== - - * deps: mime-types@~2.0.11 - - Add new mime types - -1.6.1 / 2015-03-13 -================== - - * deps: mime-types@~2.0.10 - - Add new mime types - -1.6.0 / 2015-02-12 -================== - - * fix false-positives in `hasBody` `Transfer-Encoding` check - * support wildcard for both type and subtype (`*/*`) - -1.5.7 / 2015-02-09 -================== - - * fix argument reassignment - * deps: mime-types@~2.0.9 - - Add new mime types - -1.5.6 / 2015-01-29 -================== - - * deps: mime-types@~2.0.8 - - Add new mime types - -1.5.5 / 2014-12-30 -================== - - * deps: mime-types@~2.0.7 - - Add new mime types - - Fix missing extensions - - Fix various invalid MIME type entries - - Remove example template 
MIME types - - deps: mime-db@~1.5.0 - -1.5.4 / 2014-12-10 -================== - - * deps: mime-types@~2.0.4 - - Add new mime types - - deps: mime-db@~1.3.0 - -1.5.3 / 2014-11-09 -================== - - * deps: mime-types@~2.0.3 - - Add new mime types - - deps: mime-db@~1.2.0 - -1.5.2 / 2014-09-28 -================== - - * deps: mime-types@~2.0.2 - - Add new mime types - - deps: mime-db@~1.1.0 - -1.5.1 / 2014-09-07 -================== - - * Support Node.js 0.6 - * deps: media-typer@0.3.0 - * deps: mime-types@~2.0.1 - - Support Node.js 0.6 - -1.5.0 / 2014-09-05 -================== - - * fix `hasbody` to be true for `content-length: 0` - -1.4.0 / 2014-09-02 -================== - - * update mime-types - -1.3.2 / 2014-06-24 -================== - - * use `~` range on mime-types - -1.3.1 / 2014-06-19 -================== - - * fix global variable leak - -1.3.0 / 2014-06-19 -================== - - * improve type parsing - - - invalid media type never matches - - media type not case-sensitive - - extra LWS does not affect results - -1.2.2 / 2014-06-19 -================== - - * fix behavior on unknown type argument - -1.2.1 / 2014-06-03 -================== - - * switch dependency from `mime` to `mime-types@1.0.0` - -1.2.0 / 2014-05-11 -================== - - * support suffix matching: - - - `+json` matches `application/vnd+json` - - `*/vnd+json` matches `application/vnd+json` - - `application/*+json` matches `application/vnd+json` - -1.1.0 / 2014-04-12 -================== - - * add non-array values support - * expose internal utilities: - - - `.is()` - - `.hasBody()` - - `.normalize()` - - `.match()` - -1.0.1 / 2014-03-30 -================== - - * add `multipart` as a shorthand diff --git a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/dataset/__init__.py b/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/dataset/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git 
a/spaces/zhoupin30/zhoupin30/src/lib/bots/bing/sr.ts b/spaces/zhoupin30/zhoupin30/src/lib/bots/bing/sr.ts deleted file mode 100644 index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000 --- a/spaces/zhoupin30/zhoupin30/src/lib/bots/bing/sr.ts +++ /dev/null @@ -1,106 +0,0 @@ -// @ts-ignore -const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? ( - // @ts-ignore - window.SpeechRecognition || - window.webkitSpeechRecognition || - // @ts-ignore - window.mozSpeechRecognition || - // @ts-ignore - window.msSpeechRecognition || - // @ts-ignore - window.oSpeechRecognition -) as typeof webkitSpeechRecognition : undefined - -type subscriber = (msg: string, command?: string) => void - -export class SR { - recognition?: SpeechRecognition - onchange?: subscriber - transcript: boolean = false - listening: boolean = false - private commandsRe?: RegExp - constructor(commands: string[]) { - this.recognition = SpeechRecognitionPolyfill ? 
new SpeechRecognitionPolyfill() : undefined - if (!this.recognition) { - return - } - this.configuration('zh-CN') - if (commands.length) { - this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`) - } - this.recognition.onresult = this.speechRecognition - this.recognition.onerror = (err) => { - console.log('err', err.error) - this.stop() - } - this.recognition.onend = () => { - if (this.recognition && this.listening) { - this.recognition.start() - } - } - } - - speechRecognition = (event: SpeechRecognitionEvent) => { - if (!this.listening) return - for (var i = event.resultIndex; i < event.results.length; i++) { - let result = event.results[i] - if (result.isFinal) { - var alt = result[0] - const text = alt.transcript.trim() - if (this.commandsRe && this.commandsRe.test(text)) { - return this.onchange?.('', RegExp.$1) - } - if (!this.transcript) return - this.onchange?.(text) - } - } - } - - private configuration = async (lang: string = 'zh-CN') => { - return new Promise((resolve) => { - if (this.recognition) { - this.recognition.continuous = true - this.recognition.lang = lang - this.recognition.onstart = resolve - } - }) - } - - start = async () => { - if (this.recognition && !this.listening) { - await this.recognition.start() - this.transcript = true - this.listening = true - } - } - - stop = () => { - if (this.recognition) { - this.recognition.stop() - this.transcript = false - this.listening = false - } - } - - - pause = () => { - if (this.recognition) { - this.transcript = false - } - } - - resume = () => { - if (this.recognition) { - this.transcript = true - } - } - - abort = () => { - if (this.recognition && this.transcript) { - this.recognition.abort() - this.transcript = false - this.listening = false - } - } -} - diff --git a/spaces/zomehwh/sovits-rudolf/modules/crepe.py b/spaces/zomehwh/sovits-rudolf/modules/crepe.py deleted file mode 100644 index 0bff0e3474de6483290b56993f9b845e91ef9702..0000000000000000000000000000000000000000 --- 
a/spaces/zomehwh/sovits-rudolf/modules/crepe.py +++ /dev/null @@ -1,327 +0,0 @@ -from typing import Optional,Union -try: - from typing import Literal -except Exception as e: - from typing_extensions import Literal -import numpy as np -import torch -import torchcrepe -from torch import nn -from torch.nn import functional as F -import scipy - -#from:https://github.com/fishaudio/fish-diffusion - -def repeat_expand( - content: Union[torch.Tensor, np.ndarray], target_len: int, mode: str = "nearest" -): - """Repeat content to target length. - This is a wrapper of torch.nn.functional.interpolate. - - Args: - content (torch.Tensor): tensor - target_len (int): target length - mode (str, optional): interpolation mode. Defaults to "nearest". - - Returns: - torch.Tensor: tensor - """ - - ndim = content.ndim - - if content.ndim == 1: - content = content[None, None] - elif content.ndim == 2: - content = content[None] - - assert content.ndim == 3 - - is_np = isinstance(content, np.ndarray) - if is_np: - content = torch.from_numpy(content) - - results = torch.nn.functional.interpolate(content, size=target_len, mode=mode) - - if is_np: - results = results.numpy() - - if ndim == 1: - return results[0, 0] - elif ndim == 2: - return results[0] - - -class BasePitchExtractor: - def __init__( - self, - hop_length: int = 512, - f0_min: float = 50.0, - f0_max: float = 1100.0, - keep_zeros: bool = True, - ): - """Base pitch extractor. - - Args: - hop_length (int, optional): Hop length. Defaults to 512. - f0_min (float, optional): Minimum f0. Defaults to 50.0. - f0_max (float, optional): Maximum f0. Defaults to 1100.0. - keep_zeros (bool, optional): Whether keep zeros in pitch. Defaults to True. 
- """ - - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.keep_zeros = keep_zeros - - def __call__(self, x, sampling_rate=44100, pad_to=None): - raise NotImplementedError("BasePitchExtractor is not callable.") - - def post_process(self, x, sampling_rate, f0, pad_to): - if isinstance(f0, np.ndarray): - f0 = torch.from_numpy(f0).float().to(x.device) - - if pad_to is None: - return f0 - - f0 = repeat_expand(f0, pad_to) - - if self.keep_zeros: - return f0 - - vuv_vector = torch.zeros_like(f0) - vuv_vector[f0 > 0.0] = 1.0 - vuv_vector[f0 <= 0.0] = 0.0 - - # 去掉0频率, 并线性插值 - nzindex = torch.nonzero(f0).squeeze() - f0 = torch.index_select(f0, dim=0, index=nzindex).cpu().numpy() - time_org = self.hop_length / sampling_rate * nzindex.cpu().numpy() - time_frame = np.arange(pad_to) * self.hop_length / sampling_rate - - if f0.shape[0] <= 0: - return torch.zeros(pad_to, dtype=torch.float, device=x.device),torch.zeros(pad_to, dtype=torch.float, device=x.device) - - if f0.shape[0] == 1: - return torch.ones(pad_to, dtype=torch.float, device=x.device) * f0[0],torch.ones(pad_to, dtype=torch.float, device=x.device) - - # 大概可以用 torch 重写? - f0 = np.interp(time_frame, time_org, f0, left=f0[0], right=f0[-1]) - vuv_vector = vuv_vector.cpu().numpy() - vuv_vector = np.ceil(scipy.ndimage.zoom(vuv_vector,pad_to/len(vuv_vector),order = 0)) - - return f0,vuv_vector - - -class MaskedAvgPool1d(nn.Module): - def __init__( - self, kernel_size: int, stride: Optional[int] = None, padding: Optional[int] = 0 - ): - """An implementation of mean pooling that supports masked values. - - Args: - kernel_size (int): The size of the median pooling window. - stride (int, optional): The stride of the median pooling window. Defaults to None. - padding (int, optional): The padding of the median pooling window. Defaults to 0. 
- """ - - super(MaskedAvgPool1d, self).__init__() - self.kernel_size = kernel_size - self.stride = stride or kernel_size - self.padding = padding - - def forward(self, x, mask=None): - ndim = x.dim() - if ndim == 2: - x = x.unsqueeze(1) - - assert ( - x.dim() == 3 - ), "Input tensor must have 2 or 3 dimensions (batch_size, channels, width)" - - # Apply the mask by setting masked elements to zero, or make NaNs zero - if mask is None: - mask = ~torch.isnan(x) - - # Ensure mask has the same shape as the input tensor - assert x.shape == mask.shape, "Input tensor and mask must have the same shape" - - masked_x = torch.where(mask, x, torch.zeros_like(x)) - # Create a ones kernel with the same number of channels as the input tensor - ones_kernel = torch.ones(x.size(1), 1, self.kernel_size, device=x.device) - - # Perform sum pooling - sum_pooled = nn.functional.conv1d( - masked_x, - ones_kernel, - stride=self.stride, - padding=self.padding, - groups=x.size(1), - ) - - # Count the non-masked (valid) elements in each pooling window - valid_count = nn.functional.conv1d( - mask.float(), - ones_kernel, - stride=self.stride, - padding=self.padding, - groups=x.size(1), - ) - valid_count = valid_count.clamp(min=1) # Avoid division by zero - - # Perform masked average pooling - avg_pooled = sum_pooled / valid_count - - # Fill zero values with NaNs - avg_pooled[avg_pooled == 0] = float("nan") - - if ndim == 2: - return avg_pooled.squeeze(1) - - return avg_pooled - - -class MaskedMedianPool1d(nn.Module): - def __init__( - self, kernel_size: int, stride: Optional[int] = None, padding: Optional[int] = 0 - ): - """An implementation of median pooling that supports masked values. - - This implementation is inspired by the median pooling implementation in - https://gist.github.com/rwightman/f2d3849281624be7c0f11c85c87c1598 - - Args: - kernel_size (int): The size of the median pooling window. - stride (int, optional): The stride of the median pooling window. Defaults to None. 
- padding (int, optional): The padding of the median pooling window. Defaults to 0. - """ - - super(MaskedMedianPool1d, self).__init__() - self.kernel_size = kernel_size - self.stride = stride or kernel_size - self.padding = padding - - def forward(self, x, mask=None): - ndim = x.dim() - if ndim == 2: - x = x.unsqueeze(1) - - assert ( - x.dim() == 3 - ), "Input tensor must have 2 or 3 dimensions (batch_size, channels, width)" - - if mask is None: - mask = ~torch.isnan(x) - - assert x.shape == mask.shape, "Input tensor and mask must have the same shape" - - masked_x = torch.where(mask, x, torch.zeros_like(x)) - - x = F.pad(masked_x, (self.padding, self.padding), mode="reflect") - mask = F.pad( - mask.float(), (self.padding, self.padding), mode="constant", value=0 - ) - - x = x.unfold(2, self.kernel_size, self.stride) - mask = mask.unfold(2, self.kernel_size, self.stride) - - x = x.contiguous().view(x.size()[:3] + (-1,)) - mask = mask.contiguous().view(mask.size()[:3] + (-1,)).to(x.device) - - # Combine the mask with the input tensor - #x_masked = torch.where(mask.bool(), x, torch.fill_(torch.zeros_like(x),float("inf"))) - x_masked = torch.where(mask.bool(), x, torch.FloatTensor([float("inf")]).to(x.device)) - - # Sort the masked tensor along the last dimension - x_sorted, _ = torch.sort(x_masked, dim=-1) - - # Compute the count of non-masked (valid) values - valid_count = mask.sum(dim=-1) - - # Calculate the index of the median value for each pooling window - median_idx = (torch.div((valid_count - 1), 2, rounding_mode='trunc')).clamp(min=0) - - # Gather the median values using the calculated indices - median_pooled = x_sorted.gather(-1, median_idx.unsqueeze(-1).long()).squeeze(-1) - - # Fill infinite values with NaNs - median_pooled[torch.isinf(median_pooled)] = float("nan") - - if ndim == 2: - return median_pooled.squeeze(1) - - return median_pooled - - -class CrepePitchExtractor(BasePitchExtractor): - def __init__( - self, - hop_length: int = 512, - f0_min: float 
= 50.0, - f0_max: float = 1100.0, - threshold: float = 0.05, - keep_zeros: bool = False, - device = None, - model: Literal["full", "tiny"] = "full", - use_fast_filters: bool = True, - ): - super().__init__(hop_length, f0_min, f0_max, keep_zeros) - - self.threshold = threshold - self.model = model - self.use_fast_filters = use_fast_filters - self.hop_length = hop_length - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - if self.use_fast_filters: - self.median_filter = MaskedMedianPool1d(3, 1, 1).to(device) - self.mean_filter = MaskedAvgPool1d(3, 1, 1).to(device) - - def __call__(self, x, sampling_rate=44100, pad_to=None): - """Extract pitch using crepe. - - - Args: - x (torch.Tensor): Audio signal, shape (1, T). - sampling_rate (int, optional): Sampling rate. Defaults to 44100. - pad_to (int, optional): Pad to length. Defaults to None. - - Returns: - torch.Tensor: Pitch, shape (T // hop_length,). - """ - - assert x.ndim == 2, f"Expected 2D tensor, got {x.ndim}D tensor." - assert x.shape[0] == 1, f"Expected 1 channel, got {x.shape[0]} channels." 
- - x = x.to(self.dev) - f0, pd = torchcrepe.predict( - x, - sampling_rate, - self.hop_length, - self.f0_min, - self.f0_max, - pad=True, - model=self.model, - batch_size=1024, - device=x.device, - return_periodicity=True, - ) - - # Filter, remove silence, set uv threshold; refer to the original repository's README - if self.use_fast_filters: - pd = self.median_filter(pd) - else: - pd = torchcrepe.filter.median(pd, 3) - - pd = torchcrepe.threshold.Silence(-60.0)(pd, x, sampling_rate, 512) - f0 = torchcrepe.threshold.At(self.threshold)(f0, pd) - - if self.use_fast_filters: - f0 = self.mean_filter(f0) - else: - f0 = torchcrepe.filter.mean(f0, 3) - - f0 = torch.where(torch.isnan(f0), torch.full_like(f0, 0), f0)[0] - - return self.post_process(x, sampling_rate, f0, pad_to) diff --git a/spaces/zomehwh/sovits-tannhauser/hubert/__init__.py b/spaces/zomehwh/sovits-tannhauser/hubert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/zox47/succinctly-text2image-prompt-generator/app.py b/spaces/zox47/succinctly-text2image-prompt-generator/app.py deleted file mode 100644 index 6236186cf4e23d7670a3ed158d005e5c98358b28..0000000000000000000000000000000000000000 --- a/spaces/zox47/succinctly-text2image-prompt-generator/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/succinctly/text2image-prompt-generator").launch() \ No newline at end of file